Alan Donovan and Brian Kernighan Answer Questions on Go (slashdot.org)
159 points by signa11 on Nov 20, 2015 | 106 comments



Go is as far removed from "extreme abstraction" as any other language I know. I don't know how it can even for a second be considered to have that desirable attribute.

I, for one, don't like abstraction and power just because I'm a PL geek. I admit those are also reasons, but the truth is I use them to make my codebase smaller, simpler, and easier to reason about.

For some, achieving that goal means writing for loops. For me, it's never having to write explicit loops.


Agreed. There's abstraction and then there's abstraction. I might have been in the camp of Go enthusiasts if my experience with abstraction only included C++ or Java. But once I discovered Standard ML and OCaml I found out how wonderful abstraction can be.


I'm not sure many people are arguing that abstraction isn't good for those who are writing the code--that's the whole point of abstraction after all, to make the developer's job easier.

The problems tend to crop up for other people trying to decipher and read it. Sometimes also for the machine trying to execute it.


It would be a shame if you are only making some things easier to reason about instead of everything.


>> ..., and extreme abstraction capabilities in high level languages like Common Lisp, ...

> Go has everything you mention in both of your lists of desirable attributes (depending perhaps on what you mean by "extreme abstraction")

"perhaps"? By whose definition would it ever even come close to "extreme abstraction"? All of us assembly programmers, maybe?

Don't get me wrong, I use Go where it makes sense. It has pros and cons. I'm literally programming Go right now. But "extreme abstraction"... you need more than "perhaps" to qualify that.

Or was that his point and did I just completely miss the joke? Actually, that's probably it.


You missed what Donovan said, which answered the question. Obviously PL-enthusiasts (give me all the shiny! all the time!) won't like the answer though.


I don't really understand the claim that generic programming is "shiny". It is an old idea that has been implemented many times. It's totally fine that the Go team left out generics – it's their decision – but denigrating people who lament that decision as shiny-chasing PL-enthusiasts is unfair.


Absolutely agree with what he said, and I agree it answers the question. And I agree with what you said, it's not a great answer for everybody, but it's an answer.

Yet still, Kernighan said what he said.


Kernighan's response to the C question was rather odd -- where is this myth that C is only used in embedded systems and drivers coming from?

The Linux kernel is still written in C.

Ben Klemens, author of 21st Century C, leads the statistical computing group for the research arm of the U.S. Census Bureau. He models complex systems and computations in C.

The entire Python scientific computing stack is resting on C.

A huge portion of the GHC compiler is written in C.

Many AAA game studios are writing their engines in C (with vendor-supplied C++ compilers, but C nonetheless).

And do people really write C89 for greenfield projects today? C99 is pretty amazing and added some great features to the language. C11 might be the more trivial change, having mainly added atomic operations (depending on whether you think that's trivial) and made VLAs optional (though most compilers will probably continue to support them anyway).

I think Go is a perfectly fine language but it seems like perhaps the group is a little out of touch with the rest of the world judging by replies to questions like this.


Kernighan wasn't the one claiming that C was mostly used for embedded systems and drivers; that was the guy who posed the question. Kernighan only agreed that C is still popular in that space.

As for C99, I would love to see some hard stats about its rate of adoption. Personally I can't imagine starting a new project and not using C99, but then the hard-core C contingent is a pretty conservative bunch. (Are there still systems where you can't get a C99 compiler?)


> Are there still systems where you can't get a C99 compiler?

On Windows it's pretty annoying to get a C99 compiler. The "standard" compiler for Windows development is Microsoft's, and it doesn't support C99.

Of course you can install MinGW (gcc); most open source software on Windows uses it, but it's very unusual for the average Windows developer to even have gcc, and it's completely unheard of in the commercial space.

I will also note that installing MinGW on Windows is a pretty painful experience, which doesn't help.

For most Windows developers, if it doesn't ship in Visual Studio, it doesn't exist.


That's insane. Though I suspect it's because C++ has so much more traction in the Windows world, and that seasoned C developers forced to work on the Windows platform will immediately reach for MinGW or Cygwin. Last I checked, a lot of Unix software has to be compiled with GCC, anyway.


Don't forget about the Intel C++ Compiler and Clang, never mind more obscure alternatives like TinyCC, Pelles C and Comeau C/C++.

There are options.

Also note that Microsoft has finally started improving C99 support with Visual Studio 2013 and 2015.


> Also note that Microsoft has finally started improving C99 support with Visual Studio 2013 and 2015.

Only as far as the C++ standard requires C compatibility.

For Microsoft, the future of native programming on Windows is C++ and .NET Native.

For compatibility with the open source world and the ISVs that still care about C, the answer is the integration of the Clang frontend with Visual C++'s backend, which is coming with Visual Studio 2015 Update 1.


Visual Studio 2013 supports enough of C99 that you can get by (VLAs are the main omission). Visual Studio 2015 is supposedly fully conforming.


Too late to edit, but, looking through the archives, it looks as if I might be incorrect about VS2015: https://news.ycombinator.com/item?id=10278704

Nevertheless, I stand by my comments re VS2013+. If you're accustomed to working on projects that support varied platform/compiler/libc combinations, but you've found VS2010 (and earlier) particularly difficult to work with due to the insistence on C89, you'll probably find VS2013 (and later) substantially more to your taste.

(That's certainly been my experience anyway. After some initial effort to work around library differences, my C99 code builds with gcc/clang/VS2013 - and does so, ongoing, with very little effort.)


You're right -- Kernighan didn't say that and I didn't intend to imply that he had. Apologies for those confused.

The rate of adoption question is interesting. If only there were a foundation or group out there shepherding the community and keeping tabs on such developments. If there is, I'd appreciate being pointed to the right address.


> Are there still systems where you can't get a C99 compiler?

- Commercial compilers for embedded space.

- Security areas where compilers are certified.


The trend in games these days is to move to C++ but there is still C used in some places.


Also, of course, Python itself is written in C. Other language VMs/runtimes are developed in C as well (Erlang, Lua?).


> The entire Python scientific computing stack is resting on C.

Aren't a lot of NumPy modules reliant on Fortran libraries?


> A huge portion of the GHC compiler is written in C.

Is it? I thought it was written in Haskell. Or are you referring to the GHC runtime?


I'm not against downvotes but I'm curious what in this comment is attracting them? Would help me be more insightful in the future.


I never knew that the decision not to have versioning built into Go was because of the diamond dependency problem. This makes a lot of sense now that I think about it. I wonder what alternatives to manually versioning things the Go team is considering.

Also, at the beginning the Go team wasn't all that enthusiastic about an IDE, but it's great to see that Go will get an IntelliJ-grade IDE soon, so that works for me :)


I really am not a fan of the lack of versioning, even with that explanation. Of course diamond dependencies are a problem. But they haven't actually fixed that problem; they've just declared an ill-defined method for picking one of the two dependencies ("whichever one you happen to have pulled in first"). At least with versioning I get told that there's an incompatibility in the expectations between two libraries.

With Go's removal of version numbers, they haven't removed any of the complexity of resolving incompatibilities; they've just hidden the fact that there is one. I would have much preferred a strategy of maintaining version numbers and resolving them via "either you specify what version you want pulled in, or we'll build with the most recent and throw a warning".


There are fixes for the diamond dependency problem out there, it's not like this is how to build a warp drive. For instance UNIXes have had symbol versioning since forever, they have RTLD_DEEPBIND in glibc, etc. Java is currently noodling around this problem with Jigsaw (though there's already a solution in OSGi) and it looks like for now they'll punt on solving it themselves, but provide the APIs needed for a simple container to solve it instead.

It's fine to not ship a solution in version one. But to use it as a reason to not have version numbers at all seems like heading in the wrong direction (backwards).


I think that the solution to this problem is one of the few things that Node.js got right. Instead of `require("D")` resolving to a single global copy of D, each package declares its own sub-dependencies, and even when the versions differ, each version is kept in its own scope.

https://nodejs.org/api/modules.html

  // Package A:
  var B = require("B"); // resolves to A/node_modules/B (v1.0.0)
  var C = require("C"); // resolves to A/node_modules/C (v1.0.0)

  // Package B:
  var D = require("D"); // resolves to A/node_modules/B/node_modules/D (v1.0.0)

  // Package C:
  var D = require("D"); // resolves to A/node_modules/C/node_modules/D (v2.0.0)


I'm convinced this is the right approach: dependencies should be purely local to a module. The tooling should make it easy to detect that you have a diamond, but let each module get what it needs. As everyone generally moves towards more continuous update and integration, at the end of the day it's all gonna be SHAs anyhow.


But what do you do in a typed language? For example, suppose you have modules A and B, which both depend on module C. Let's also say that module A has a function foo that returns a value of type C.t, and module B has a function bar that accepts a value of type C.t.

Now let's say that module A is depending on version 1.0.0 of module C, and module B is depending on version 1.5.2 of module C.

Does B.bar(A.foo()) still work? Is type t the same in C version 1.0.0 and 1.5.2?
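To make the scenario concrete, here's a rough Go sketch, with two locally declared types standing in for the two vendored copies of C (all names made up):

    package main

    // cv1T and cv2T stand in for C.t as seen by module A (built against
    // C 1.0.0) and module B (built against C 1.5.2). Go's nominal typing
    // treats them as distinct types even when the definitions are identical.
    type cv1T struct{ N int }
    type cv2T struct{ N int }

    func foo() cv1T      { return cv1T{N: 1} } // A.foo
    func bar(t cv2T) int { return t.N }        // B.bar

    func main() {
        // bar(foo()) // compile error: cannot use cv1T value as type cv2T
        _ = bar(cv2T(foo())) // the explicit conversion compiles only because
                             // the two struct definitions happen to match
    }

Whether that conversion is even the right thing to do is another question; the point is that the compiler surfaces the diamond instead of silently unifying the types.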


I think the pragmatic approach is to just let that be a type analysis failure and force the user to deal with it. We don't need the module system to be automatically perfect in every imaginable extreme, we need it to be fundamentally safe, and then do sane useful things in the situations that are safe.


Good point. But if B.bar accepts an interface that C.t implements, then everything should work just fine and that should be the way to go on public APIs.


But what if the interface is defined in C? The interface has to live somewhere, and wouldn't it be versioned too?


I think that this is a good approach - even better would be to include the version specifiers in the code itself...

    B = import("b", ">1.0")
    C = import("c", ">1.1 && <2.0")
etc.


I have never really thought about this problem before, but would it not be possible to:

1. Treat each version of a package as a separate package. If there are two versions required, they are both included.

2. At compile time transform package names to include the desired version number.

I may be completely misunderstanding or over-simplifying this problem.
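If I'm reading (1) and (2) right, from the consumer's side it might look something like this; the import paths and the Version function are entirely hypothetical:

    package main

    import (
        "fmt"

        libv1 "example.com/lib/v1" // hypothetical: package "lib" pinned at 1.x
        libv2 "example.com/lib/v2" // hypothetical: the same package at 2.x
    )

    func main() {
        // The two versions coexist as distinct packages; they share
        // no types and no state.
        fmt.Println(libv1.Version(), libv2.Version())
    }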


For some things, yes. But there are a lot of potentially tricksy issues in doing that.

Imagine a queueing library, for instance. You push something onto a queue created with library A, and then you want to read from library B (A and B are different versions of the same library). Just about anything could happen (depending on how the library/language are implemented). You might get an error due to library B reading a data structure formatted for library A. Or you might get nothing back, because library B is checking for the queue in its own registry, while it exists in library A's. Etc.

If there is no library-dependent state between calls, or the calls are not being mixed between libraries, you can totally do that, though. A JSON serialization library can be supported in that manner, for instance. The queueing library example would work too if it's only being used to support other libraries or separate functionalities, such that you never push onto a queue in library A and expect to pop it off from a queue in library B. But that's not something the library writer can guarantee; it's all about your own usage. So there are a lot of potential issues.


Making the version part of the type generally fixes these issues. Anything that could go wrong will nearly always go wrong at compile time.


Sure, that fixes some problems, but I don't understand how that would solve the first example listed above. Can you explain how the compiler would prevent this runtime error?

  // package a
  lib-v16.Queue(int32) // Stores in queue for version 16 of lib
  // package b
  int32 = lib-v18.DeQueue() // Loads from queue of version 18 of lib, which is empty
And even if we accept that you shouldn't have state in the package and the queue should be passed around, now we just have two packages (a & b) that use incompatible versions of lib for potentially no reason.


That is a theoretical problem, yes. In practice I've never seen it be an issue, because (a) global state is frowned upon and (b) packages that depend on each other so tightly tend to have dependencies at the type level, making the probability that such errors slip past the compiler very low.

To expand on (b), suppose that packages A and B depend on a shared queue managed by package C, as in your example. Because A and B have such tightly coupled logic, chances are that some package--either one of A and B, or some other package that depends on A and B--will have code that expects a type from "A's C" to be equivalent to a type from "B's C". At that point the code will fail to compile, and the programmer will fix the problem before it ever hits production. (Fixing this is easy because you can just inspect the lockfile to find all the crate versions in use.) Basically, static typing makes it such that, while the problem exists in theory, in practice its probability is so low as to not be a concern.


It catches the error, but doesn't solve the problem, because now you can't write the program you intended to write.


Sure you can. Just run cargo update [-p] to get both of your dependencies on the same version and then recompile.

This won't work if the dependencies are truly depending on incompatible-at-the-API-level versions of the same library, of course. But there's no magic bullet solution to that—not handling versioning in the language/package manager doesn't make that problem any easier.


Ah, so we're on the same page :). I definitely wasn't claiming Go's (non-)solution was the way to go, just that importing multiple versions of the same library doesn't solve the problem either. I definitely agree with you that this (Rust/Cargo's) is the best solution I've seen so far.


Yes, that is essentially what systems like Rust/Cargo do. It works great.


The fix for Google is to check in all third party libraries locally, so you get the same version as everyone else in the company until someone decides to check in an upgraded version. It works fine and anyone else is free to do the same.

But that requires a monorepo that scales and people to test and check in the upgrades, so it doesn't work well for open source development, which is why people are working on alternatives.


Right, but you could do that with version numbers, too. Updating from "whatever version was pulled in when we first set up the repo with this library" to "current" is no different from updating from "version X" to "version Y", except that you don't know what the differences are, and there's no way to determine that it would be easier (and possible) to update to version X.1 instead, since that includes the fix/feature that triggered the desire for an update and breaks fewer things, so it requires less work.


Yes, if you're going to check stuff in locally, you need to document which version you pulled to make the next upgrade easier. So it's not like nobody is keeping track of versions (or better yet, git commits).

But this can all be done using a separate system. The build system itself doesn't need to know anything about versions. If you look at Bazel, it doesn't let you specify versions for target deps. Only the developer actually upgrading a third-party library needs to deal with open source version numbers, and everyone else can build on their work.


"You'll need to document which version you pulled" - so now you're tasked with describing "I got this version" (be that git commit or whatever), rather than it being explicit as part of the package "This is (x) version". So you get to reimplement something yourself, in a non-portable, non-self-describing way. Such an improvement.

Of course it could be done in a separate system. It could be done and tracked in a separate system even with version numbers. Go has just chosen to -require- you to run your own package manager to make things stable, whereas for most languages it's optional and you can create a repeatable, portable build without it.


No, it doesn't really make much sense.

The diamond dependency problem only happens when two indirect dependencies are identical but on different versions, and those two versions are incompatible. That's two big ifs. 99% of the time, it's not a problem.

Notice first that this is a pretty rare problem, since libraries tend to be backward compatible these days. And if you happen to come across a badly behaved one, you simply exclude the one you don't want from the graph; problem solved.

This problem has been fixed in Maven for almost ten years now; the only reason not to implement this crucial feature in a version manager is laziness.


Everyone seems to overlook this, but Go has had LiteIDE for a very long time and it keeps getting better.


I thought the same. I use it every day and it's great. I don't understand why it doesn't get more traction.


Because visually, it's reminiscent of Eclipse 1.0.

For better or worse, a sizable proportion of developers choose IDEs on the basis of looks.


Heh, I choose IDEs based on performance, not looks, and liteide is blazing fast compared to anything else I've used.


I'm not sure if the Intellij Go Plugin is going to be made into a separate IDE, but it's already available for their other IDEs as a plugin and they have a team improving it constantly. Notably, it also works with their open-source IDEA Community Edition.

https://github.com/go-lang-plugin-org/go-lang-idea-plugin

So far it's the best "Go IDE" I've used.


Here's a solution to the diamond dependency problem: Imports can only be done for a specific lexical scope. Also, imports must specify a particular version. Lastly, calls to a package go to a specific version of the package, and types are defined in a specific version. (This is why all imports must be done for a specific lexical scope.)

The same scheme could be used to resolve diamond dependencies in multiple inheritance as well.


That's essentially the solution Rust and Cargo adopt. It occasionally causes strange errors like "expected Foo, found Foo" (where the first Foo was, say, Foo version 1.1 and the second was Foo version 1.2), but the compiler now detects this situation and tries to explain what's going on.

I'm certain I'm biased, but in my experience it's been the most robust way to handle versioning that I've dealt with.


I proposed this in a Smalltalk newsgroup over a decade ago. The reaction? I was ridiculed.

Imports naming specific versions would also allow better integration of code/build into a version control system. Actually, the entire OS should be built with version control in mind from the ground up.


No truly good idea will ever escape ridicule.


> the compiler now detects this situation and tries to explain what's going on.

Oh really? That's great news! I remember having that problem earlier this year and thinking that it wasn't the greatest from a usability standpoint.


The recently open-sourced VSCode also has Go support now. Good first impression. The docs and type-inference features seem more convenient than the oracle plugin + GoSublime for Sublime.


From trying it out, what does this new support have that LiteIDE does not?


I've been using LiteIDE for a while and it is pretty good. The only thing I have a problem with is configuring build tags: for portions of the code disabled by build tags, it doesn't let me search for variable and function declarations. I wish there were a way to toggle that. Another missing feature is "jump to line".


I forced myself to use it all day yesterday and today so far (most of the time, I was writing Go).

I am loving it, honestly. Microsoft has a great product here.


I thought it was just syntax highlighting. Any link documenting VSCode's Go support?



Jump to 2:15, great to see!


Thanks for the link. Great demo!



Thanks.


Remember to install dlv (which requires fiddling with certs on OS X) if you want to launch and debug programs. If you don't, Code won't complain; it will just silently fail to launch the program.


"The reason it took God only six days to create the universe is that he didn't have to deal with the embedded base."


Nice interview. Plenty of good details. I mean, there was this slip:

"The languages... are either long gone (PL/1) "

http://www-03.ibm.com/software/products/en/plicompfami

http://www.fujitsu.com/fts/products/computing/servers/mainfr...

http://www.iron-spring.com/about.html

Next he's going to be telling you COBOL is gone, too. Big-time writers' definition of dead/gone in IT has always seemed different from general usage. ;)


This is utterly hypnotic. Nice work.


> In general, Go strongly encourages being explicit about errors.

That's completely incorrect: it's trivial (and common) to just ignore errors in Go:

    ok, _ := Foo()
Checked exceptions don't let you get away with this kind of sloppy programming.


It's also trivial to "just ignore errors" in a language based on checked exceptions:

    try {
        ok = foo()
    } catch (Exception e) {
        // Do nothing
    }
Checked exceptions don't save you from "this kind of sloppy programming".


But you have to write the `catch` (or a `throws`). That's the point.

Now if you choose to handle this stupidly, that's entirely your fault, but at least the compiler did its job by forcing you to think about the error case.

In Go, the compiler doesn't enforce anything.


How many lines of Go have you written before writing this?

In Go, as in Java with checked exceptions, the compiler forces the programmer to handle the error.

Using _ in Go is not idiomatic. It's the exact equivalent of using a try/catch block with nothing in the catch block in Java.
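For contrast, a minimal sketch of the idiomatic shape (readConfig is a made-up example):

    package config

    import (
        "fmt"
        "io/ioutil"
        "os"
    )

    // The error is either handled or returned with added context; the _
    // form is the only way to silently drop it, and that stands out in
    // code review.
    func readConfig(path string) ([]byte, error) {
        f, err := os.Open(path)
        if err != nil {
            return nil, fmt.Errorf("open %s: %v", path, err)
        }
        defer f.Close()
        return ioutil.ReadAll(f)
    }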


Or not checking for the exception at all. Certain classes of errors can and should be allowed to pass up the call stack and cause your program to die (something strange that I wasn't expecting and don't know how to handle happened, and now I'm in a bad state - safer to exit)


And it'll happen frequently, because if you're just trying to test out `ok = foo()` and you're not sure if it'll work, then you've got to write four lines of awful boilerplate and go read an API reference just to get it to compile. Checked exceptions are supposed to encourage correctness, but there's no fudge room for playing around, so they are very not fun.

As an aside, I once tried using a crypto library to do something fairly simple (encrypt a file I'm writing to disk), and I had to handle a whole stack of exceptions that I had no clue how to handle. So what can you do? You should be able to write code that provides sensible default behavior, but checked exceptions make you work to just get that behavior, which is not how default behavior is supposed to work.


If you don't know how to handle an exception, just add it to your `throws` clause and let a caller handle it. That's the whole point of exceptions: letting you choose who is most appropriate to handle the error.

Return-value errors like Go's force everyone to care about all errors, at all times, even those they can't handle, which is why you see the pattern

    ok, err := Foo()
    if err != nil {
every ten lines in Go sources.


You (the programmer) still have to handle it. If I (as a person) don't know how to handle the exception, passing it up the call stack doesn't solve the problem, it just moves it somewhere else. The problem is that I don't know where it should be handled, because the library was exposing cryptic and obscure exception types that probably make sense to someone who really understands cryptography. I don't, so I just can't use that library.

If those were runtime exceptions, my program would just crash on error, which is the default I wanted. I didn't want to pollute my code with error handling stuff just to try out some simple functionality. If they were runtime exceptions, I could learn to use the code from the ground up, not by having to understand the whole thing at once.


> it just moves it to somewhere else.

That's exactly the point: not everyone along the method stack is forced to deal with it, only the one caller that knows how to handle it.

> The problem is that I don't know where it should be handled

Then don't handle it at all and let it crash the program. But at least you didn't add boilerplate simply bubbling up an Err at every level of the stack frames.

> If those were runtime exceptions, my program would just crash on error, which is the default I wanted

Sometimes it is, sometimes it isn't. Some exceptions should crash your program (runtime exceptions), others should be handled (checked exceptions). Languages that take correctness seriously should offer you both options.


I'm not talking about handling the error, I'm talking about boilerplate. In order to get the program to even crash on error, I had to handle 4 different kinds of exceptions. I'm as much a fan of explicitness as anyone, but there's a difference between explicit and verbose. And there's a lot to be said for constructs that don't force you to indent your code every time you call a function.

My goal wasn't to write correct code, it was to test something out. To get comfortable with the library and the task. Checked exceptions get in the way of that.

>Languages that take correctness seriously should offer you both options.

Rust and Haskell take correctness seriously and offer neither. Checked exceptions make perfect sense in terms of ensuring correctness, but they are awful for usability and they are not the only way to achieve correctness.

>Some exceptions should crash your program (runtime exceptions), others should be handled (checked exceptions).

Shouldn't the user determine that, not the implementor? Why should any exceptions not be checked?


> Why should any exceptions not be checked?

Because there are exceptions you can't do anything about (e.g. OutOfMemoryException) and exceptions that you don't know what to do with (e.g. an NPE where you didn't expect it).

NPE is the poster child for an unchecked exception: if you know your code is throwing an NPE here, just fix it instead of catching the NPE.


> Checked exceptions don't let you get away with this kind of sloppy programming.

They sure do. Just write your Java code without a try/catch block anywhere, or just have your catch do absolutely nothing. You can do it, trust me. Exceptions don't stop programmers from doing anything.


> Just write your Java code without a try/catch block anywhere

The point of checked exceptions is that you can’t do that. It is a compile-time error to not either catch the exception or explicitly indicate that you will propagate it.

Unfortunately, at least in Java, that style proved too onerous for a lot of programmers and motivated the catch-all, do-nothing wrapper idiom that is completely unhelpful as far as safety goes.

> Exceptions don't stop programmers from doing anything.

At least in principle, you can statically detect any failures to handle possible exceptions if you have a suitable type system. Of course, if you just hack around those warnings, as we’ve seen Java programmers do with checked exceptions and catch-alls, then you’re no better off than if you ignored a relevant return code in the first place (aside, perhaps, from making it much more obvious to a static analyser or during a code review that you are doing so).


> > Just write your Java code without a try/catch block anywhere

> The point of checked exceptions is that you can’t do that.

Sure you can. You just have to add "throws Exception" to all your method signatures.


That would be a (perhaps rather foolish) variation of the “explicitly indicate that you will propagate it” case.


> Just write your Java code without a try/catch block anywhere

You can't. I don't think you understand how checked exceptions work.


This is not idiomatic Go code and in my experience not many people will do this. New Go programmers who ask for help with a piece of code that ignores the error are told immediately to check the error. And experienced Go programmers simply do not ignore the error.


You've never seen the Java "pattern" of

  try {
    foo();
  } catch (Exception e) {
    ;
  }

?


This is not idiomatic Java code and in my experience not many competent people will do this. New Java programmers who ask for help with a piece of code that ignores the error are told immediately to check the error. And experienced Java programmers simply do not ignore the error.

Also, you don't need the semicolon.


It's not idiomatic, or good, but neither is ignoring the return value from a Go function call. Both languages fail to provide better affordances for error checking.


That it's bad style is kind of the point: the standard response from Go advocates is that ignoring errors is not idiomatic, but it's not good style anywhere, in any language where it isn't outright illegal.


I think we're in violent agreement. I wasn't saying that Go's affordances solve this problem, merely pointing out that checked exceptions in Java aren't any kind of a dispositive solution.


I don't think it is common, unless you work with a pretty bad code base. You simply don't ignore errors like that.


Although a lot of people are pointing out (correctly) that it's trivially easy to ignore a checked exception, it does at least do two things:

1. Signal to the calling programmer that some error condition can occur. For example having to catch SomethingNotFound tells them that it's possible that Something might not be found.

2. You can write into your coding standards that you must do something sensible inside a catch block, and get a code checker to break the build when you don't.

I don't think anybody is trying to say that checked exceptions have "solved" the problem of people ignoring error conditions. It does at least give you an easily visible thing to point at and say "that's wrong" though.


I've got a blog post on deck that basically says that just as Joel on Software proposed that the vast bulk of the advantage of scripting languages in the 2000s was garbage collection, I propose that the big advancement in error handling lately is simply the idea that it ought to be stuck in the programmer's face, and the exact manner in which it does so isn't really very important.

By that standard, exceptions actually fit in between C-style obliviousness and explicit error returns. They do allow a certain amount of implicitness (does this code have no try/catch exception handlers because the programmer is deliberately invoking the default exception behavior, or is it because they didn't think about errors at all?), but you still can't ignore errors the way you can in C. And then with explicit error returns, you can't ignore them at all without leaving a trail of your decision to do so right there in the source code.

Checked exceptions tried to straddle the boundary, but I think I have to agree with the general consensus that they are a failed experiment. One of my "cut through the noise" metrics for language design decisions is "do any subsequent languages pick up the feature?". If a language as dominant as Java has a feature, but after 10-15 years no new languages are picking up the feature, that Means Something.


I like how Swift does it. There are no real exceptions, but handling error objects by jumping to the end of a block is fairly easy. It reduces boilerplate and increases readability.

  do {
    try error-return-statement
    statement
    statement
    try error-return-statement
  } catch ErrorType1 {
    ...
  } catch ErrorType2 {
    ...
  }
It looks like exceptions, but under the hood error objects are being returned by the error-return-statements, hence the explicit try keywords before them. It retains Go's error-object simplicity, but keeps things readable and tidy.


jerf, when you write that blog post, would you submit it on HN? I'd like to see it.


This is not a good example of how Go makes it easy to ignore errors (which it does in some cases, but not in that way). You had to acknowledge the dropped error syntactically.


A better example is the timeless:

    err := f1()
    if err != nil {..} // have to do this or compiler error

    err = f2()
    // oops, forgot to actually do anything about it!
Now, granted, `go vet` helps with this, but these kinds of things can be solved by the language proper in much better ways; they should be type errors, like Rust's `Result` or Haskell's `Either`.

Edit: Rust's `Result` is especially nice with #[must_use]. This has saved my team from mistakes relatively frequently.


Even beyond that is:

    var foo *bar
    foo.Baz()
Accidentally dereferencing a nil pointer in Go is dangerously easy.


Actually that doesn't dereference a nil.

https://play.golang.org/p/cjmflMBhF7

Why not learn the language before criticizing it?


To be fair, we don't know if `Baz` is a pointer or value receiver in the parent example. So it could: https://play.golang.org/p/fYO8jBbVQ5


That's true, but it's pretty uncommon for a method to take a value receiver, and if it does, you either have a good reason for it or you're doing something wrong.


You created a playground that intentionally avoids the nil ptr dereference.

https://play.golang.org/p/Cu4sE829ZZ
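Both playgrounds come down to the same distinction; a minimal sketch (names made up):

    package main

    type bar struct{ n int }

    // Calling a method on a nil *bar is legal; it only panics if the
    // body actually dereferences the receiver.
    func (b *bar) Baz() {}

    func (b *bar) Boom() int { return b.n }

    func main() {
        var foo *bar
        foo.Baz()      // fine: nil receiver, never dereferenced
        _ = foo.Boom() // panic: nil pointer dereference
    }

So the parent's example only crashes if Baz actually touches its receiver.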


You can actually create meaningful implementations of methods on nil pointers.

    func (b *bar) Emote() {
        if b == nil {
            fmt.Println(":)")
        } else {
            fmt.Println(b.emote)
        }
    }
In this case, that's useless, but that's an artifact of the chosen example. I use it every so often in places where it happens to have meaning.

Go often treats nil as a legal value, which means that some of the things you might expect to crash don't, and when used idiomatically can sometimes make for shorter code. For instance, the "length" call on slices will happily take a nil and return 0, you can "append" to a nil slice and get a slice back, etc. Ultimately it's still a language with a null in it, though. There's no non-nil pointer type.
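For instance, the nil-slice behavior described above:

    package main

    import "fmt"

    func main() {
        var s []int            // a nil slice
        fmt.Println(len(s))    // 0: len is defined on nil slices
        s = append(s, 1, 2, 3) // append allocates a backing array as needed
        fmt.Println(s)         // [1 2 3]
    }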



