If you want to dabble in functional, C# is actually a compelling option now. Switch expressions and LINQ can take you a long way.
I strongly believe that functional programming is not a good fit for 100% of software architecture. The best is some sort of hybrid. Generally, the closer you get to the business logic, the more functional you would want to be. The serializers, http servers, etc. are probably not worth the squeeze to force into a functional domain.
I agree that 100% functional is not necessarily the way to go. That said, I am a big fan of F#'s "functional first" ethos, which drives you toward that sweet spot of a functional core inside of an imperative shell.
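Concretely, by "functional core, imperative shell" I mean roughly this shape (a toy Haskell sketch with made-up names: all the decision-making lives in pure functions, and IO only happens at the edge):

```haskell
-- Functional core: pure, trivially testable decision-making.
nextGreeting :: String -> String
nextGreeting name = "Hello, " ++ name ++ "!"

-- Imperative shell: a thin IO layer that talks to the outside world
-- and delegates every decision to the core.
main :: IO ()
main = do
  name <- getLine
  putStrLn (nextGreeting name)
```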
With C#, when I'm trying to use functional design, I often feel like I'm swimming against the current. The language has functional features, and it will certainly let you use them, but, LINQ aside, the path of least resistance is mostly imperative and object-oriented.
In F#, it's the other way around. It has a full suite of object-oriented features - still, after all these years, quite a bit more complete than C#'s suite of functional features - but the path of least resistance is mostly functional.
I don't want to say that's a universal best way to have things. But it suits my taste, because it makes the easiest way to do things correspond very closely with the way I like to see things done: distinct and well-distinguished layers of functional and object-oriented code, sorted according to where each is of the most utility. C#, by contrast, tends to guide you toward a foamy lather of functional and object-oriented code with no identifiable organizing principle.
The creator of LINQ, Erik Meijer, wrote a fun article, "The Curse of the Excluded Middle: 'Mostly functional' programming does not work". Yes, it's tongue-in-cheek and highly provocative, but many of his points are true. There is a lot more yet to be learned from functional programming than LINQ and switch expressions.
This is such a weird article. All of his examples are insane strawman non-idiomatic C# code, and then he complains when they don't do whatever he decides "an average programmer" would expect them to do. Huh? They were all either doing exactly what I expected them to do (silly things), or were so strange and alien that even Visual Studio has no idea what to do with them! I've never seen anything like that Cell<T> example and it doesn't come close to compiling. What a weird, weird thing to do: write code that literally does not compile and then complain about it!?
Is it meant to be tongue-in-cheek? I personally didn’t get that impression. I assumed he was using the non-religious meaning of “fundamentalist”: strict and literal adherence to a set of principles.
> The serializers, http servers, etc. are probably not worth the squeeze to force into a functional domain.
I used to think the same, but having now tried FP-style libraries for serializers (Thoth.Json) and HTTP servers (Suave), I think they are far superior to imperative or OOP alternatives. The ability to design these things in a declarative style makes the code much more robust and easier to read.
FP is better on a 30-year timeframe, but on a next-quarter timeframe it probably still has not "crossed the chasm" – you need not just something accessible like ZIO, you also need to figure out what its killer app is, and the FP community currently doesn't have a clue. "HTTP server but more declarative and made with monads" is not something people can't live without.
I initially believed this, but I have come to realize that pure functional programming is transformative for both business logic and infrastructure. It's a lot easier to get started on the business-logic side; dealing with effects in the context of pure functions is a lot to figure out, and I don't even know if you really can do it outside Haskell. But via Haskell I have been able to achieve far greater levels of productivity in these tasks than ever before.
The main downside is that it's a lot to learn, and it takes a good while before you get productive. But I am absolutely more productive now in Haskell than in just about anything else, save perhaps ad-hoc text-munging tasks; the shell DSL is just so handy for random one-off stuff.
I think serializers would be better off as railways, allowing you to combine small serializers while also collecting the errors so you end up with a conclusive outcome. Haskell's parser combinators are quite fun for example.
Also, I've only used Node/Elixir server-side, but aren't HTTP servers just a huge pipeline? That fits quite well into functional programming.
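Something like this is what I have in mind (a Haskell sketch; the Request/Response types and stage names are made up): each stage is a small pure function that can fail, the handler is just the stages chained together, and swapping Either for an applicative Validation type would collect every error instead of stopping at the first.

```haskell
{-# LANGUAGE OverloadedStrings #-}
import Data.Text (Text)

-- Made-up types, only to sketch the shape of the pipeline.
data Request  = Request  { reqPath :: Text, reqBody :: Text }
data Response = Response { resStatus :: Int, resBody :: Text }

-- Each stage is a small, pure function that can fail.
authenticate :: Request -> Either Text Request
authenticate req = Right req                  -- stub

parseOrder :: Request -> Either Text Text
parseOrder req = Right (reqBody req)          -- stub

-- The handler is just the stages chained with (>>=);
-- the first Left short-circuits the rest of the pipeline.
handle :: Request -> Either Text Response
handle req =
  authenticate req >>= parseOrder >>= \order ->
    Right (Response 200 ("accepted: " <> order))
```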
Absolutely. At a certain level of abstraction they can be viewed in this way.
How do HTTP servers actually get packets to and from the machine? At some point you have to interact with the operating system. This is not a realm where functional programming is very feasible today.
> The serializers, http servers, etc. are probably not worth the squeeze to force into a functional domain.
These are two examples I would probably come up with if you asked me “where does FP thoroughly beat imperative?”
(De)serializers - parser combinator libraries like attoparsec absolutely blow imperative parser libraries out of the water in most dimensions. Bidirectional codec libraries like Haskell’s Binary, Serial, and Aeson[1] are top-tier. Functional formatters like Blaze are top-tier as well.
Haskell’s HTTP ecosystem is massively better than anything else I’ve used, and I went on a binge of trying a ton of HTTP servers like 6 years ago (in python, ruby, C++, Go, and finally Haskell).
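For a flavour of the codec side, here's a minimal Aeson sketch (the Person type is made up for illustration):

```haskell
{-# LANGUAGE OverloadedStrings #-}
import Data.Aeson

-- An example record, made up for illustration.
data Person = Person { name :: String, age :: Int }

-- Decoding: declare how to pull each field out of a JSON object.
instance FromJSON Person where
  parseJSON = withObject "Person" $ \o ->
    Person <$> o .: "name" <*> o .: "age"

-- Encoding: declare how the record maps back to JSON.
instance ToJSON Person where
  toJSON p = object ["name" .= name p, "age" .= age p]

-- usage: decode "{\"name\":\"Ada\",\"age\":36}" :: Maybe Person
```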
I would have to respectfully disagree with you on the serializers part here. For me, after decades of fighting with “magic” stuff, simplicity is the key feature. So, for everything that is a simple map function in disguise, I tend to stick with functional languages and approaches.
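A toy example of what I mean by "a map in disguise" (the User type and the output format are made up):

```haskell
-- A made-up record, just for illustration.
data User = User { userName :: String, userAge :: Int }

-- One record -> one line of output.
encodeUser :: User -> String
encodeUser u = userName u ++ "," ++ show (userAge u)

-- Serialising the whole collection is literally just a map.
encodeUsers :: [User] -> [String]
encodeUsers = map encodeUser
```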
Functional programming is not just programming with closures. It's about "referential transparency". Referential transparency permits higher-level mathematical reasoning, which leads to safer/better program composition, optimisations, caching/reuse, and parallelisation/concurrency. If you are willing to give up mutable state and other uncontrolled effects, there is much to be gained.
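A toy example of what that buys you: a referentially transparent expression can be replaced by its value anywhere it appears, which is exactly what makes reuse, caching, and reordering safe.

```haskell
-- Pure: same input, same output, no hidden effects.
square :: Int -> Int
square x = x * x

-- Because `square 5` is referentially transparent, these two
-- definitions are interchangeable, and computing the value
-- once (caching it) is always safe.
twiceA, twiceB :: Int
twiceA = square 5 + square 5
twiceB = let y = square 5 in y + y
```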
You might mean "enforced referential transparency" by having immutable variables and no I/O. Otherwise most languages allow you to have pure functions and nothing forces you to mutate state.
I really wish they put more priority on bringing discriminated unions into C#.
Seems like the most significant thing they could do.
I've spent so much time looking into languages like Rust, Haskell, and F#, pretty much just because I want a compiled language with DUs. But I always lose interest due to more practical requirements like tooling and the package ecosystem.
If C# had them, I could stop looking. I would basically use it for pretty much everything except web stuff. And that's coming from someone who used to be very anti-MS. To me C# does everything I want in a language, except for lacking native simple DUs.
There's lots of cool things I've learnt about in FP languages. But discriminated unions are really the only feature that I feel like I'm really missing out on in other languages.
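For anyone unfamiliar, this is roughly the feature in Haskell terms (F# DUs and Rust enums look much the same; the Payment type is made up): the type says a value is exactly one of a fixed set of shapes, and the compiler checks you've handled them all.

```haskell
-- A discriminated union: a payment is exactly one of these shapes.
data Payment
  = Cash
  | Card String          -- card number
  | Wire String String   -- IBAN, reference

describe :: Payment -> String
describe Cash         = "cash"
describe (Card n)     = "card ending " ++ drop (length n - 4) n
describe (Wire _ ref) = "wire transfer (" ++ ref ++ ")"
-- With warnings on (-Wall), forgetting a case gets flagged
-- as a non-exhaustive pattern match.
```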
> I strongly believe that functional programming is not a good fit for 100% of software architecture. The best is some sort of hybrid.
And yeah, I agree with you. FP is awesome at many things, but not everything. I went down the FP rabbit-hole and had many revelations. But once the excitement wore off, I realised that I still like using classes for some things that are just inherently mutable, e.g. GUIs, progress bars, SQL transactions, etc.
Not to mention that being able to type `someObject.` and then get autocomplete for all its methods is a huge time saver, and something that is quite a lot more painful with pure FP where you're much more reliant on your memory and looking things up to find all the functions for something.
I think Rust did really well in balancing the two. I just want Rust's simple struct instantiation and enums in C#, and I could stop wasting time looking for the mythical "perfect language".
You may be interested in (for example) the approach taken with Halogen, the PureScript web framework. Every widget on the screen has its own state and actions, widgets send each other messages, etc. It ends up being very "object oriented" in the way you're suggesting, but it is also very principled, i.e. components send each other messages but do not have access to each other's internal states. This gives you a really good way to separate concerns, while each "widget" keeps its own state and you don't have to maintain a single global state à la Elm.
> I strongly believe that functional programming is not a good fit for 100% of software architecture. The best is some sort of hybrid.
Haskell would be a better language in every domain where C# is applicable. That is from a language perspective; the abundance of libraries is a different question, as it's a matter of community size and attention.
I think enforced purity is problematic (although obviously useful for some purposes such as program correctness proofs). I still want to be able to write:
```
doThing1();
// thing1 has finished
doThing2();
```
Where the functions block. Async and Monads and Futures and all are useful for some things, but mostly I just want to do stuff sequentially, blocking, and not confuse myself.
Scala allows this, although the widespread Future-ification of libraries makes it hard to actually practice.
Idk, I mean, simply knowing what the code for >> looks like doesn't mean you're used to thinking in that way.
There was a time years ago when someone was showing me the types of (>>) and (>>=) in Haskell and my question was "ok, but what does that have to do with IO?!?"
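For reference, here are the types in question and what they amount to once the monad is pinned down to IO (a toy example; greet is made up):

```haskell
-- General signatures:
--   (>>)  :: Monad m => m a -> m b        -> m b
--   (>>=) :: Monad m => m a -> (a -> m b) -> m b
--
-- Specialised to IO they are just sequencing:
--   (>>)  :: IO a -> IO b        -> IO b  -- run the first, discard its result, run the second
--   (>>=) :: IO a -> (a -> IO b) -> IO b  -- run the first, feed its result to the second

greet :: IO ()
greet =
  putStrLn "What's your name?" >>
  getLine >>= \name ->
  putStrLn ("Hello, " ++ name)

-- do-notation is only sugar for exactly that chain:
greet' :: IO ()
greet' = do
  putStrLn "What's your name?"
  name <- getLine
  putStrLn ("Hello, " ++ name)

main :: IO ()
main = greet'
```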