> that data flowing between them consists of simple, agreed-upon formats
> HTTP servers and clients, email systems, DNS resolvers—they all interoperate based on simple protocols
Yeah, massive disagree with this. There is absolutely nothing "simple" about these protocols. Try composing 3-4 UNIX programs together and you necessarily need to run the pipeline to see if it even makes sense. Don't get me started on HTTP.
It makes my blood boil to read "simple" in the context of software engineering
How do you deal with persistent sessions in VSCode? I have a remote VM to which I connect through SSH using VSCode, but I need to have certain programs running even after I disconnect from the VM. It's the only reason why I use tmux and I haven't been able to get rid of it.
Thanks! I didn't actually build it myself; a professor and another student were also involved. When I joined the project the decision had already been made: the professor wanted to use APL, and we were using an old CPU architecture book as a reference that itself used APL (Gerrit A. Blaauw and Frederick P. Brooks Jr., Computer Architecture: Concepts and Evolution, 1st Edition).
We almost got the PDP-11 working, apart from some extensions like floating-point arithmetic.
>GNU is free but pretty much abandoned. Support for Windows was (is?) nonexistent.
GNU APL was never abandoned, dev just went a long time without doing a proper release. I believe the Windows issue is just with cygwin, never looked into it since I don't use Windows.
a) “professional” writ large often encompasses academic research, even though it’s also often specifically used in contrast. Language is weird like that. Former and current academics like to have a term for all the stuff we’ve been paid to do, and we usually default to “professional experience” (though current academics do usually say “research experience” because you’re right that professional has the default connotation of corporate employment).
b) the project seems to have been over 2000 lines of APL. That’s not the “small” student project you’re implying, in any language, and in APL that’s quite substantial.
So you go through my profile and assume that you know my professional background? What makes you think that that's the only project where I've used APL?
I don't think he went through your profile; I suspect he just remembered the discussion he had with you in the linked thread. But how about we end this? It's not productive or interesting.
I remembered it, and it seems like a reasonable assumption that your first experience of APL being so bad would put you off doing any more in the future.
My favorite language, used to interpret my most hated language (I've used both professionally).
There are several things I disagree with regarding Haskell (like the idea of a "monad's internal state"), which is understandable given that this is OP's first time using the language, but I want to highlight one particular observation:
> This uncertainty of time of evaluation also makes catching errors difficult, because calling catch on the function that throws the error will not necessarily catch that error
It's important to distinguish between imprecise exceptions (e.g. calls to `error`, `undefined`, and the like) and synchronous exceptions (async exceptions are not important for the article).
> Catch must be called on the function that forces evaluation on that error. This is something that is hard to trace, and something that types don’t help much with.
The types are actually the most important part here! Synchronous exceptions cannot be thrown by pure code (as long as we're not dealing with escape hatches like `unsafePerformIO`), while IO code can throw and catch all kinds of exceptions.
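A minimal GHC sketch of why `catch`/`try` placement matters: the handler only sees an imprecise exception if the thunk is actually forced inside it, which is what `evaluate` is for (the names `bad`/`main` below are just for illustration):

```haskell
import Control.Exception (ErrorCall (..), evaluate, try)

-- A pure value that throws an imprecise exception when forced.
bad :: Int
bad = error "boom"

main :: IO ()
main = do
  -- `try (return bad)` does NOT catch: `return` never forces `bad`,
  -- so the thunk escapes the handler unevaluated.
  r1 <- try (return bad) :: IO (Either ErrorCall Int)
  case r1 of
    Left _  -> putStrLn "caught (unexpected)"
    Right _ -> putStrLn "escaped: thunk not forced inside try"
  -- `evaluate` forces the value while still inside IO,
  -- so the handler can see the exception.
  r2 <- try (evaluate bad) :: IO (Either ErrorCall Int)
  case r2 of
    Left (ErrorCall msg) -> putStrLn ("caught: " ++ msg)
    Right _              -> putStrLn "no exception (unexpected)"
```

Running this prints `escaped: thunk not forced inside try` followed by `caught: boom`, which is exactly the "uncertainty of time of evaluation" the article complains about.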
Regarding catch, yes, I agree types help, but they can help even more! I suggest an IO-wrapper effect system (mine is called Bluefin; effectful is also a good choice). Then there is absolutely no ambiguity about where an exception can be handled. There is exactly one place -- no more, no less. It makes dealing with exceptions very easy.
No, algebraic effects are a generalization that supports more cases than Lisp's condition system, since continuations are multi-shot. The closest thing is `call/cc` from Scheme.
Sometimes drawing these parallels hurts more than not having them in the first place
Only Android is missing from that list (marked as "Experimental"). We could argue about maturity but this is a bit subjective.
> Go also has somewhat better control over data layout
How? C# supports structs, ref structs (stack allocated only structures), explicit stack allocation (`stackalloc`), explicit struct field layouts through annotations, control over method local variable initialization, control over inlining, etc. Hell, C# even supports a somewhat limited version of borrow checking through the `scoped` keyword.
> This is meant to be something of a 1:1 port rather than a rewrite, and the old code uses plain functions and data structures without an OOP style.
C# has been consistently moving in that direction by taking more and more inspiration from F#.
The only plausible reason would be extensive use of structural typing, which is present in TS and Go but not in C#.
That's sort of the problem with C#. It couples the type (struct vs class) with allocation. C# started life by copying 1990s Java's "everything is a reference" model, so it's in a weird place where things were bolted on later to give more control but it still needs to support the all-objects-are-refs style. C# is just not ergonomic if you need to care about data layout in memory.
Go uses a C-like model. Everything is a value type. Real pointers are in the language. Now you can write a function that inputs pointers and does not care whether they point to stack, heap, or static area. That function can be used for all 3 types, no fuss.
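A small sketch of that point (function and variable names are my own): one function taking a `*int` works identically whether the pointee is a local, a heap allocation, or a package-level variable.

```go
package main

import "fmt"

var global = 30 // package-level (static area)

// bump works on any *int, regardless of where the int lives.
func bump(p *int) {
	*p += 1
}

func main() {
	local := 10      // local variable (stack, unless it escapes)
	heap := new(int) // explicit allocation (heap, modulo escape analysis)
	*heap = 20

	bump(&local)
	bump(heap)
	bump(&global)

	fmt.Println(local, *heap, global) // 11 21 31
}
```

One caveat worth noting: Go's escape analysis decides the actual storage location, so `new(int)` may still land on the stack; the point is that the caller and callee never have to care.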
> It couples the type (struct vs class) with allocation
Agree. Where things are allocated is a consumer decision.
> C# is just not ergonomic if you need to care about data layout in memory
I disagree. I work on a public high-performance C# codebase and I don't usually face issues when dealing with memory allocations and data layout. You can perfectly well use structs everywhere (value types) and pass references when needed (`ref`).
> Now you can write a function that inputs pointers and does not care whether they point to stack, heap, or static area.
You can do this perfectly fine in C#, it might not be what some folks consider "idiomatic OOP" but I could not care less about them.
Chances are it was just the personal preference of the team, and decades of arguing about language design have worn out Anders Hejlsberg. I don't think structural typing alone is enough of an argument to justify the choice over Rust. Maybe the TS team thought choosing Go would have better optics. Well, they won't have it both ways: in my opinion this decision is clearly short-sighted, and as someone aptly pointed out on Twitter, they will now be beholden to Google's control over Go should they ever need the compiler to support a new platform or evolve in a particular way. That's something they would have gotten easily with .NET.
On the topic of preference, this thread has really shown me that there is a HUGE appetite for a native-AOT GC language that is _not_ Go. People want AOT because of the startup and memory characteristics, but they don't want to sacrifice language ergonomics. C# could fill that gap if Microsoft would push it there.
Doubt is human, but it isn't always warranted. In C++ you can use a concurrent, completely pause-free garbage collector, where the programmer decides which data is managed by the GC. This enables code optimizations in ways that aren't possible in C# and Java.
You realize that is literally not the same thing? I said equivalent code. The whole reason for using a managed language with a GC is to not think about those things, because they eat up thought and development time. Of course a language that lets you hand-optimize every little line will eventually be more performant. I really think you're discounting both C#'s ability to do those things and just how good Java's GCs are. Anyway, that's not the point.
The point is C++ sucks dude. There is no way that you can reasonably think that bolting a GC on to C++ is going to be a pleasurable experience. This whole conversation started with _language ergonomics_. I don’t care that it’ll save 0.5 milliseconds. I’d rather dig holes than write C++.
Where performance is paramount, developer convenience takes a backseat. Moreover, C++ has evolved significantly in recent years and is now quite enjoyable to use. We’re also discussing a tool in this thread whose performance is critical for developers. Over-simplifying code will ultimately lead to programmers using such solutions being replaced by AI, while the software itself will demand enormous computational power. That’s not the way forward.
We’re talking about a tool whose performance profile with a managed language is perfectly acceptable as deemed by the choice to use Go. Let alone the fact that this thread you’ve been replying in has never been about achieving the utmost performance.
You’re absolutely delusional if you think C++ is enjoyable compared to any managed language or if you think AI is capable of replacing anything.
You’ve moved this conversation extremely far off topic and I won’t be replying again.
Cheers dude. Good luck with your chat bots and the CVEs from your raw pointers.
I assume that the original performance profile of these tools was satisfactory to their creators, yet they still decided to rewrite them. I admire programmers who claim that their tools don't need to be maximally optimized. This is likely an attempt to justify the fact that their products aren't exceptionally performant either. Just take a look at the TIOBE rankings, and you'll see how many programmers hold a different view than you.