
> The world is not functional

What does this even mean? If the world is not functional, then what is it? Is the world procedural? And what world are we talking about? Our planet, physically? Are you then discounting the worlds of mathematics and logic? You don’t gain performance? What performance? Program execution speed? Development pace and time to market?

Compared to the procedural version? Is your view that procedural programming is inherently better — for any definition of the word — regardless of context? Would SQL queries be easier to write if we told the query planner what to do? Is the entire field of logic programming — and by extension expert systems and AI — just a big waste of time?

So many vague aphorisms which do nothing to further the debate. And Haskellers are the ones getting called “elitist”!



>> The world is not functional

> What does this even mean? If the world is not functional, then what is it?

The world is full of special cases, of corner cases, of exceptions, of holes, of particular conditions, of differences for first or last element, and other irregularities; it is also full of sequential events. Applying a function to a starting set, to produce a resulting set or value is very lean with clean nice mathematical objects and relations, but not with many real-world problems.

> And Haskellers are the ones getting called “elitist”!

Well, one may say that answering with questions could fit the bill...


> The world is full of special cases, of corner cases, of exceptions, of holes, of particular conditions, of differences for first or last element, and other irregularities; it is also full of sequential events.

True

> Applying a function to a starting set, to produce a resulting set or value is very lean with clean nice mathematical objects and relations

True

> but not with many real-world problems.

Debatable, but in any case a non-sequitur. Are you sure you're talking about functional languages as they're used in reality?

I once wrote a translator between an absurdly messy XML schema called FpML (Financial products Markup Language) and a heterogeneous collection of custom data types with all sorts of "special cases, corner cases, exceptions, holes, etc.". I wrote it in Haskell. It was a perfect fit.

https://en.wikipedia.org/wiki/FpML
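
Nothing like the real schema, of course, and every name below is made up for illustration, but the shape was roughly this: each irregular case gets its own constructor, so a forgotten case becomes a compiler warning rather than a silent hole.

    -- Hypothetical sketch, nothing like the real FpML types: one
    -- constructor per irregular product, so a missing case becomes a
    -- compiler warning instead of a silent hole.
    data Trade
      = VanillaSwap Double [Double]        -- notional, fixed-leg rates
      | FxForward   Double (Maybe String)  -- settlement date is sometimes absent
      | Exotic      String                 -- raw payload for the truly weird stuff
      deriving Show

    -- The "translate it or explain why you can't" step is total by construction.
    translate :: Trade -> Either String String
    translate (VanillaSwap n _)       = Right ("swap, notional " ++ show n)
    translate (FxForward n (Just d))  = Right ("forward settling " ++ d ++ ", notional " ++ show n)
    translate (FxForward _ Nothing)   = Left "forward without a settlement date"
    translate (Exotic raw)            = Left ("unsupported product: " ++ raw)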


> The world is full of special cases, of corner cases, of exceptions, of holes, of particular conditions, of differences for first or last element, and other irregularities; it is also full of sequential events.

Yes. All of which are modelled in Haskell in a pretty straightforward manner. I’d argue Haskell models adversity like this better than most languages.
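
For instance, the "differences for first or last element" irregularity is just a pattern match. A toy example:

    -- Toy example: join words with commas, but put "and" before the
    -- last one, the classic "the last element is special" case.
    commaAnd :: [String] -> String
    commaAnd []      = ""
    commaAnd [x]     = x
    commaAnd [x, y]  = x ++ " and " ++ y
    commaAnd (x:xs)  = x ++ ", " ++ commaAnd xs

commaAnd ["red", "green", "blue"] gives "red, green and blue"; the special cases are spelled out rather than hidden in loop bookkeeping.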

> but not with many real-world problems.

Haskell is a general purpose language. People use it to solve real world problems every day. I do. My colleagues do. A huge amount of people do.

> Well, one may say that answering with questions could fit the bill...

I see. So trying to define the terms to enable constructive discourse is elitist. Got it.

If you want me to be more concrete and assertive, fine. No problem. Here we go.

You are wrong.


It's not vague: processors are still procedural. Network, disk, terminals, they all have side effects. Memory and disk are limited.

SQL queries are exactly one of those cases where functional expression of a problem outperforms the procedural expression, and that's why they're used where it matters.


Are you suggesting functional programming is somehow so resource intensive that it is impractical to use? Or that functional programming makes effectful work impractical?

Because neither of those are true.

You’ve conceded that SQL queries are one case where a functional approach is more ergonomic (after first asserting that the world is not functional, whatever that means). Why aren’t there other cases? Are you sure there aren’t other cases? One could argue that a functional approach maps more practically and ergonomically for the majority of general programming work than any other mainstream approach.
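
To make that concrete, here is a toy sketch (made-up field names, just an illustration) of the same query-like style applied to ordinary in-memory data, no SQL engine involved:

    import Data.List (sortOn)
    import Data.Ord  (Down (..))

    -- Toy example: "total amount of premium orders" and "top N orders
    -- by amount", written as pipelines rather than accumulating loops.
    data Order = Order { customer :: String, premium :: Bool, amount :: Double }

    premiumTotal :: [Order] -> Double
    premiumTotal = sum . map amount . filter premium

    topOrders :: Int -> [Order] -> [Order]
    topOrders n = take n . sortOn (Down . amount)

premiumTotal and topOrders read roughly like the SQL they replace, which is the point.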


> Are you suggesting functional programming is somehow so resource intensive that it is impractical to use? Or that functional programming makes effectful work impractical? Because neither of those are true.

No, I'm saying the functional abstraction doesn't work or is clunky in a lot of cases and that Haskell's approach of making it the core of the language is misguided (Lisp is less opinionated than Haskell for one).

> One could argue that a functional approach maps more practically and ergonomically for the majority of general programming work than any other mainstream approach.

They could argue, but they would be wrong.

SQL works because it's a strict abstraction on a very defined problem.

Functional is great when all you're thinking about is numbers or data structures.

But throw in some form of IO or even random numbers and you have to leave the magical functional world.

And you know why SQL works so fine? Because there are a lot of people working hard in C/C++ doing kernel and low-level work so that the "magic" is not lost on the higher level abstractions. And can you work without a GC?


> I'm saying the functional abstraction doesn't work or is clunky in a lot of cases and that Haskell's approach of making it the core of the language is misguided

Functional programming certainly doesn't work literally everywhere, but to say Haskell's design is "misguided" is your opinion, and it's one that some of the biggest names in the industry reject. How much experience do you have designing programming languages? Or even just building non-trivial systems in Haskell? Judging by the evident ignorance masked with strong opinions I'd say around about the square root of diddly-nothing.

> But throw in some form of IO or even random numbers and you have to leave the magical functional world.

Wrong. Functional programming handles IO and randomness just fine.
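
A minimal sketch of what that looks like (this assumes the random package for System.Random; it's an illustration, not anyone's production code). Effects live in IO, and the pure logic stays an ordinary function:

    import System.Random (randomRIO)

    -- Pure logic: no effects, trivially testable.
    judge :: Int -> Int -> String
    judge secret guess
      | guess == secret = "correct"
      | guess <  secret = "too low"
      | otherwise       = "too high"

    -- Effects (randomness, console IO) are sequenced explicitly in IO.
    main :: IO ()
    main = do
      secret <- randomRIO (1, 100 :: Int)
      line   <- getLine
      putStrLn (judge secret (read line))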

> And you know why SQL works so fine? Because there are a lot of people working hard in C/C++ doing kernel and low-level work so that the "magic" is not lost on the higher level abstractions

Are you suggesting there aren't a lot of people working hard on GHC? Because if you are — and you seem to be — then again you would be wrong.


> processors are still procedural

Exactly. I don't know why Haskell fanboys insist on abstracting us so far from the machine. Declarative programming is really unhelpful because it says nothing about the order in which code executes.

We live in a mutable physical universe that corresponds to a procedural program. One thing happens after another according to the laws of physics (which are the procedural program for our universe). Electrons move one after another around circuits. Instructions in the CPU happen one after another according to the procedural machine code.

The universe would never consider running 2020 before 2019 and a CPU will never execute later instructions before earlier instructions.

Haskell fanboys talk about immutable data structures like it's beneficial to go back in time and try again. But it's a bad fit for the CPU. The CPU would never run some code and then decide it wants to go back and have another go.


You're saying a lot of wrong things about CPUs. CPUs do execute instructions "out-of-order", and they do speculative execution and rollback and have another go. Branch prediction is an example.

All of this is possible only with careful bookkeeping of the microarchitectural state. I agree the CPU is a stateful object. But even at the lowest level of interface we have with the CPU, machine code, there are massive gains from moving away from a strict procedural execution to something slightly more declarative. The hardware looks at a window of, say, 10 instructions, deduces the intent, and executes the 10 instructions in a better order, which has the same effect. (And yes, it's hard for me to wrap my head around it, but there is a benefit from doing this dynamically at runtime in addition to whatever compile-time analysis.) In short, it is beneficial to go back and have another go.

This was also demonstrated in https://snr.stanford.edu/salsify/. If you encode a frame of video but your network all of a sudden becomes too slow, you might want to re-encode that frame at a lower quality. Because these codecs are extremely stateful (that's how temporal compression works), you have to be very careful about managing the state so you can "go back and have another go".
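
Immutable state makes that kind of retry cheap to express. A toy sketch with a made-up "codec" (nothing to do with Salsify's actual code): because the previous state is never destroyed, encoding again from it at a lower quality is just a second function call.

    -- Toy sketch with a fake codec: the old state is immutable, so
    -- "go back and have another go" is just re-applying the function.
    data CodecState = CodecState { refFrame :: [Int] }

    encode :: Int -> CodecState -> [Int] -> ([Int], CodecState)
    encode quality st frame =
      let out = map (`div` (101 - quality)) frame   -- pretend "compression"
      in  (out, st { refFrame = out })

    encodeWithBudget :: Int -> CodecState -> [Int] -> ([Int], CodecState)
    encodeWithBudget budget st frame =
      let (hi, st') = encode 90 st frame
      in  if length (filter (/= 0) hi) <= budget
            then (hi, st')              -- first attempt fits the network budget
            else encode 40 st frame     -- retry from the same old state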

I am less confident about this, but what you say about the universe also seems wrong. Which physical laws do you know of that take the form of specifying the next state in terms of the preceding state? And many of them are time-reversible.


Thanks. It was an attempt at parody but apparently I didn't lay it on thick enough. I'll try to up my false-statements-per-paragraph count next time.


Many people believe what you wrote about CPUs, and many simpler CPUs really do operate like that, so it was difficult for me to detect as parody. Sorry that I missed it! It's funny in hindsight.

Not sure what the parody was in computing 2020 before 2019.


> Exactly. I don't know why Haskell fanboys insist on abstracting us so far from the machine.

I believe it is because abstractions are the way we have always made progress. Is the C code that's so close to the machine not just an abstraction of the underlying assembly, which is an abstraction of the micro-operations of your particular processor, which in turn is an abstraction of the gate operations? The abstractions allow us to offload a significant part of the mental task. Imagine trying to write an HTML web page in C. Sure, it's doable with a lot of effort, but is it as simple as writing it using abstractions such as the DOM?

> We live in a mutable physical universe that corresponds to a procedural program. One thing happens after another according to the laws of physics (which are the procedural program for our universe).

You just proved why abstractions are useful. "One thing happens after another" is simply our abstraction of what actually happens, as demonstrated by e.g. the quantum eraser experiment [1][2].

[1] https://en.wikipedia.org/wiki/Delayed-choice_quantum_eraser

[2] https://www.youtube.com/watch?v=8ORLN_KwAgs


>I believe it is because abstractions are the way we have always made progress.

>The abstractions allow us to offload a significant part of the mental task

Edge/corner cases in our abstractions are also how propagation of uncertainty [1] happens. You can't off-load error-correction [2].

[1] https://en.wikipedia.org/wiki/Propagation_of_uncertainty

[2] https://en.wikipedia.org/wiki/Quantum_error_correction


I'm not sure what you mean by "You can't off-load error-correction". In the case of classical computing, we do off-load error-correction (I don't have to worry about bit flips while typing this). In the case of quantum computing, if we couldn't off-load error-correction, an algorithm such as Shor's couldn't be written down without explicitly taking error-correction into account. Yet it abstracts this away and simply assumes that the logical qubits it works with don't have any errors.


> Instructions in the CPU happen one after another

https://en.wikipedia.org/wiki/Superscalar_processor

> a CPU will never execute later instructions before earlier instructions

https://en.wikipedia.org/wiki/Out-of-order_execution

> The CPU would never run some code and then decide it wants to go back and have another go

https://en.wikipedia.org/wiki/Speculative_execution


Actually, it's FP fanboys in general, going all the way back to those IBM mainframes where Lisp ran.

Even C abstracts the machine: the ISO C standard is written for an abstract machine, not the high-level Assembly many think it is.

Abstracting the machine is wonderful: it means that my application, if coded properly, can scale from a single CPU to a multicore CPU coupled with a GPGPU, or be distributed across a data cluster.

Have a look at Feynman's Connection Machine.


> The universe would never consider running 2020 before 2019 and a CPU will never execute later instructions before earlier instructions.

Except when a compile-time optimization reorders them.


And out-of-order execution, which happens at _runtime_.


This 1000 times.

The reason SQL (really - relational algebra) works so well is precisely because relational data is strongly normalized [1].

But the data is only a model of reality, not reality itself. And when your immutable model of reality needs to be updated, strong normalisation is a curse, not a blessing. The data structure is the code structure in a strongly-typed system [2].

Strong normalisation makes change very expensive.

[1] https://en.wikipedia.org/wiki/Normalization_property_(abstra...

[2] https://en.wikipedia.org/wiki/Code_as_data



