> In a functional world persistence (e.g memory) cannot be a thing.
Why not?
To give a functional, Haskell-flavoured example: I can use the Reader and Writer monads (which are purely functional) to build a whole collection of operations that write to and read from a shared body of data. That feels a lot like memory to me.
Indeed, the Haskell documentation describes the Reader monad as:
> Computations which read values from a shared environment.
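To make that concrete, here's a minimal sketch combining the two, using the mtl-style `Control.Monad.Reader` and `Control.Monad.Writer` modules. The `Env` type, `lookupVar`, `record`, and `demo` are all made up for illustration:

```haskell
import Control.Monad.Reader
import Control.Monad.Writer

-- The shared "memory": an environment of named values.
type Env = [(String, Int)]

-- Read a value from the shared environment (pure; no mutation).
lookupVar :: String -> Reader Env (Maybe Int)
lookupVar k = asks (lookup k)

-- Accumulate entries into a shared log (also pure).
record :: String -> Writer [String] ()
record msg = tell [msg]

demo :: Env -> (Maybe Int, [String])
demo env = runWriter $ do
  record "looking up x"
  let v = runReader (lookupVar "x") env
  record ("found " ++ show v)
  return v

-- > demo [("x", 7)]
-- (Just 7, ["looking up x","found Just 7"])
```

Nothing here is mutated, yet the computation reads from a shared environment and writes to a shared log, and both of those facts are visible in the types.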
I just don't understand the whole "you can't have memory / order of operations / persistence / whatever else" argument against functional concepts when these things were implemented in functional ways decades ago. The modern Haskell implementation of the Writer monad is an implementation of a 1995 paper.
Edit: it looks like the person I responded to doesn't actually want to have a reasonable discussion, but for anyone else reading along: it's entirely possible to have functional "state" or "memory" - what makes it functional is that the state / memory must be explicitly acknowledged.
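As a minimal sketch of what "explicitly acknowledged" means (the `next` counter below is made up for illustration), here's the State monad, where the "memory" is just a value threaded through pure functions and announced in the type:

```haskell
import Control.Monad.State

-- A counter: each call reads and "updates" memory,
-- but no cell is mutated; a new state value is produced.
next :: State Int Int
next = do
  n <- get        -- read the current "memory"
  put (n + 1)     -- produce the new "memory"
  return n

threeTicks :: (Int, Int)
threeTicks = runState (next >> next >> next) 0
-- (2, 3): the last value read, and the final state
```

The type `State Int Int` tells you exactly what memory is in play; `runState` just threads each successive state value through the computation.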
Trying to dismiss functional computation this way is essentially a no-true-Scotsman argument: functional computation is useless because it can't do X (X being memory, or persistence, or whatever), but when someone presents a functional computation that does do X, it's somehow not a "real" functional computation precisely because it does X. Redefining functional computation as "something that can't do X" doesn't help anyone; it sidesteps any real discussion of the pros and cons of functional programming, because you're no longer discussing functional computation but some inaccurate, designed-to-be-useless definition mislabeled as functional computation.
Interesting point. Lambda calculus is equivalent, where the expression can be seen as playing the role of the ticker tape. So when a reduction occurs in lambda calculus, it's just like a group of symbols being interpreted and overwritten. But the thing is, no one has to overwrite the symbols to "reduce" them; doing so is just a space (memory) optimization. Usually, to calculate something (you're fully correct, by the way), we do overwrite symbols. The point of the immutable style is simply to make sure that references don't get overwritten out from under you. It sucks to have x + 2 = 8 and x + 2 = -35.4 within the same scope, right? Especially when the code that overwrote x lives in a separate file, in an edge case you didn't consider.
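A minimal sketch of that idea (the `Expr` type and `step` function are made up for illustration, and the substitution is capture-naive, which is fine for a demo): a beta-reduction step builds a new expression, and nothing forces you to overwrite the old one:

```haskell
-- A tiny untyped lambda calculus.
data Expr = Var String
          | Lam String Expr
          | App Expr Expr
          deriving Show

-- Capture-naive substitution: replace free occurrences of x with v.
subst :: String -> Expr -> Expr -> Expr
subst x v (Var y)   | x == y    = v
                    | otherwise = Var y
subst x v (Lam y b) | x == y    = Lam y b
                    | otherwise = Lam y (subst x v b)
subst x v (App f a) = App (subst x v f) (subst x v a)

-- One beta-reduction step; the input Expr is left untouched.
step :: Expr -> Expr
step (App (Lam x b) a) = subst x a b
step e                 = e

-- > step (App (Lam "x" (Var "x")) (Var "y"))
-- Var "y"   -- a new expression; the original redex still exists
```

An interpreter could reuse the old expression's storage, but that's purely an optimization; the semantics of reduction never require overwriting anything.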
The case is convincing, but it requires you to reject your own human faculties.
Makes the use/expression of language rather awkward when you can't remember any words.