Neat. How did you do that? (Seems like something a language ought to forbid, anyway).
edit: ok, from what I've deciphered from your Ruby code, it's nothing special. You've declared a global boolean, and every invocation of the function "p" flips its value, so it's only a parlor trick that has nothing to do with the expressivity of the language. Any language with side-effects and mutable globals can achieve this trick in the same manner. It's a trick because your first "p" and your second "p" aren't the same "p", so there's no real violation of the "law of non-contradiction".
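For illustration only, here is a minimal sketch of the same trick in Haskell with a mutable IORef - this is not the linked Ruby code, the names "flag" and "p" are made up, and I'm assuming the flip works as described above:

    import Data.IORef
    import System.IO.Unsafe (unsafePerformIO)

    -- Hypothetical global boolean, initially True (names are made up).
    flag :: IORef Bool
    flag = unsafePerformIO (newIORef True)
    {-# NOINLINE flag #-}

    -- Each call returns the current value and flips it for the next call.
    p :: IO Bool
    p = atomicModifyIORef' flag (\b -> (not b, b))

    main :: IO ()
    main = do
      x <- p              -- True on the first call
      y <- p              -- False on the second call
      print (x && not y)  -- prints True, so "p AND not p" looks True

The first call reads True, the second reads False, so the conjunction comes out True for exactly the reason described: the two calls never read the same value.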
What about the rest of my comment? The expression problem? That getters/setters are an anti-pattern in OOP languages (just ask Alan Kay...)?
> Seems like something a language ought to forbid, anyway
> What about the rest of my comment? The expression problem?
Navigating around the expression problem is precisely what I am demonstrating. A language ought not [1] decide for me what I am allowed to express or not. I have a healthy disregard for linguistic purism [2].
Expressive power is the ability to say whatever I want to, whenever I want to. That is precisely what a para-consistent logic buys me - expressivity [3].
This is counter-intuitive to most people who have been taught to value consistency above all else (mathematicians, logicians). Non-contradiction is an axiom, not a law - it's a false authority. A man-made deity.
> it's nothing special
I didn't claim it's anything special - I merely claimed that I can say it. I did say it.
If "Haskell can say everything Ruby can" then go ahead and say it in Haskell. I am not saying it's impossible - but I a curious how you might navigate the Turing tarpit [4].
>It's a trick because your first "p" and your second "p" aren't the same "p"
First p and second p? You mean p@time(1) and p@time(2)?
Are 'you' not the same 'you' as the 'you' from 1 second ago, despite the ongoing changes in your body?
Ironically, that is the disconnect between pure functions and reality. Side-effects are the norm, not the exception.
>That getters/setters are an anti-pattern in OOP languages (just ask Alan Kay...)
That's an appeal to authority. Alan Kay doesn't get to make decisions for me. He gets to voice his opinion - I get to evaluate the pros and cons. The choice is always mine.
Like I said - in my problem-space, lack of choice/control is an anti-pattern. Humans get the final say - not algorithms.
You're being disingenuous. You are not really breaking the law of non-contradiction. At first I thought you had at least redefined the "not" or "and" operators, but you merely wrote a function which alternates between True and False with each invocation, using global state. You've written "true ∧ !false == true", which of course violates no law. Hence: trickery. You're not expressing anything novel.
This can be done trivially in Haskell by making the global param explicit, and I'm sure that with some trickery like the State monad you can even hide it. Maybe the syntax will be less neat, but this is a good thing: "hiding" things like side effects is bad.
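For example, one possible sketch with the state made explicit (illustrative only, assuming the mtl package's Control.Monad.State - not anyone's actual code):

    import Control.Monad.State

    -- "p" returns the current Bool and flips it for the next evaluation;
    -- the former global is now an explicit piece of threaded state.
    p :: State Bool Bool
    p = do
      b <- get
      put (not b)
      return b

    -- Starting from True: the first p sees True, the second sees False,
    -- so "p && not p" comes out True - same trick, state made explicit.
    demo :: Bool
    demo = evalState (do { x <- p; y <- p; return (x && not y) }) True

    main :: IO ()
    main = print demo  -- True

The flip is right there in the type State Bool Bool, so nothing about the side effect is hidden.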
The expression problem relates to your assertion about adding getters/setters, and judging by your reply it seems you didn't understand this...
So you're dismissing Alan Kay with no good reason. Furthermore, getters/setters have nothing to do with either extensibility or expressiveness. Why should I pay attention to your opinion?
Then implement your operators. Then everything is what it is defined to be. How can this be worse than sneakily changing the semantics of already-defined things?
In a free world it should be legal to sell copper as gold, call lies the truth, and mix up newborns in the hospital - and people are just stupid if they want laws against these... ;)
Strawman. You are conflating denotational semantics [1] with operational semantics [2].
In the free world, a pilot is free to recognize that the 'already-defined autopilot' is doing something stupid/dangerous (despite all the green lights) and is able to take control of the system at run-time - he doesn't have the luxury of fixing this bug at compile-time.
In a free world people are allowed to "sneakily change their minds" when they see an obvious denotational error.
Beware of bugs in the above code; I have only proved it correct, not tried it. --Donald Knuth
Type safety is not the same thing as system safety. The latter cannot be formalized - that is why humans are in charge. Not algorithms.
Making laws against airplanes crashing is not the same thing as stopping airplanes from crashing.
Here is a relevant extract from this paper[3]:
"All ambiguity is resolved by actions of practitioners at the sharp end of the system. After an accident, practitioner actions may be regarded as ‘errors’ or ‘violations’ but these evaluations are heavily biased by hindsight and ignore the other driving forces, especially production pressure. "
> Strawman. You are conflating denotational semantics with operational semantics
But your trick is neither. The law of non-contradiction refers to the same proposition P in both positions, simultaneously:
> "To express the fact that the law is tenseless and to avoid equivocation, sometimes the law is amended to say "contradictory propositions cannot both be true 'at the same time and in the same sense'" [0]
Yours isn't "at the same time" because each evaluation of P (which in your case is a programming function with side effects, neither a proposition nor a mathematical function) depends on the other evaluation to cause a side effect.
So you're cheating, you haven't expressed anything innovative or outside the realm of other languages (including Haskell), and you haven't broken the law of non-contradiction.
>The law of non-contradiction refers to the same proposition P in both positions, simultaneously
You are describing quantum superposition, but then you are straddling the classical and quantum paradigms when interpreting the LNC.
How many instructions/CPU cycles does it take to evaluate p ∧ ¬p? More than 1? Then what do you even mean by "simultaneously"?
>which in your case is a programming function with side effects, neither a proposition nor a mathematical function
You are making my argument for me. The LNC is an axiom of Mathematics/Logic. It's not a law as in a law of physics.
And so if a physical system happens to violate it - it's hardly a big deal.
The real world actually contains things which are in two (or more) states simultaneously. We call them qubits and we use them to represent uncertainty.
What I have shown is a mutating getter. It's not supposed to be novel or original (I don't know why you are measuring me against such ludicrous ideals). I am simply demonstrating to you a real-world scenario in which interacting with a system alters its state (deterministically or otherwise), which allows me to do the "impossible": evaluate p ∧ ¬p as True. It's just a race condition.
Side-effects are the norm, not the exception throughout the universe in general. Your classical computer would have no CPU registers, no cache, no memory, no persistence without side effects.
If the Mathematical religion frowns upon side-effects, why should I care about it when it clearly doesn't correspond to the universe I live in?
And if you are upset about my "trickery" (oh no! Physics is cheating!) here is an implementation that doesn't mutate global state.
> What I have shown is a mutating getter. It's not supposed to be novel or original
A mutating getter doesn't violate the law of non-contradiction, because a getter is not a proposition. In fact, if you use a getter at all, you're outside non-contradiction. The rest of your post is ridiculous, but feel free to keep conflating this with quantum physics or qubits or whatever, instead of acknowledging that rather than breaking the law of non-contradiction, your real assertion is "my language can mutate global variables behind the scenes". Much less impressive, right?
Also, you keep avoiding mentioning what your "problem space" is. I suspect this is because your "problem space" is rather pedestrian and doesn't require anything you claim it does.
Not upset, by the way. That's chicanery on your part. Usually done by people who know they are losing the argument ;)
> A mutating getter doesn't violate the law of non-contradiction, because a getter is not a proposition. In fact, if you use a getter at all, you're outside non-contradiction.
Q.E.D. You are admitting that non-contradiction is an axiom of language and not an actual law. As in a law of physics.
I don't expect you to answer my question re: the number of CPU cycles you consider to be a "simultaneous" evaluation of p ∧ ¬p, lest you have to admit the LNC is incoherent ;)
> That's chicanery on your part. Usually done by people who know they are losing the argument ;)
Isn't this what somebody who knows they are losing the argument would say? ;) But if you are in it to win arguments - have a noddie badge. You win The Internet.
My problem-space is the CAP theorem. Global-scale distributed systems. Where most of the complexity is 'side effects'. But we call it network IO/communication/persistence/API calls etc.
Haskell would add zero value, because the bugs are rarely about type errors, and predominantly about Byzantine failures.
> Q.E.D. You are admitting that non-contradiction is an axiom of language and not an actual law. As in a law of physics.
But it was you who introduced the law of non-contradiction in this conversation, then wrote a program that had nothing to do with it, and when called out you started with this nonsense about physics, qubits, what it means for "you" to be "you", and other unrelated topics. Either your program is about the LNC or it isn't, and if it isn't (as it turned out to be the case) why bring it up in the first place?
The CPU thing is nonsense and I don't see what I have to answer about it. In contrast, the Expression Problem is directly related to your initial assertion about extensibility, but instead you talked about something else. Getters/setters aren't about extensibility, by the way.
Finally, Haskell excels at IO, network, API calls, etc. People write distributed systems in Haskell -- why on earth would you think otherwise?
> It's so pedestrian, it's probably your home page.
I don't understand this. It reads like an attempt at trolling but I don't understand what it means. In which sense should I be offended? Google is my home page, does that help you somewhat?
I did do that, with the intent of determining whether the discussion is about denotational semantics (theory) or operational semantics (practice).
In theory there is no difference between theory and practice, but in practice there is...
>Either your program is about the LNC or it isn't
>The CPU thing is nonsense and I don't see what I have to answer about it
Until you decide on the 'CPU thing', I don't see how you can possibly decide whether ANY program is about the LNC or not.
Denotationally - it clearly is about the LNC as defined. It expresses precisely "P ∧ ¬P".
Operationally - it is about the LNC IF and only IF "P ∧ ¬P" can be evaluated simultaneously.
If your notion of "simultaneously" doesn't mean "evaluated in a single CPU cycle/instruction" then explain what you mean?
The leaky abstraction of CPU time is somewhat inconvenient to functional purists. As is the fact that strongly-typed Haskell reduces to untyped machine code.
Somewhere between typed and untyped Lambda calculus lies the choice of "right tool for the job".
>Getters/setters aren't about extensibility, by the way.
Would this be a good time to call out the fact that all you have been doing is Socratizing? Saying that X is not Y says nothing about Y.
You asked me to give you an example, and trivially extending an object-model to support new fields in unstructured (read: untyped) data is not good enough for you.
If you think 'extensibility' has some universal/objective/theoretical meaning devoid of practical context, then go ahead and tell us what that meaning is.
> Finally, Haskell excels at....
How are you measuring that and against what objective standard for 'excellence' (or lack thereof)?
>People write distributed systems in Haskell.
People write (and have written) distributed systems in all sorts of languages.
In what measurable way is Haskell better?
>Google is my home page, does that help you somewhat?
Only in so far as Google doesn't use Haskell.
If Haskell offers value at that scale/complexity, then I imagine "building a better Google with Haskell" is a great idea for a startup?
>Haskell can say everything Ruby can
Go ahead and say P ∧ ¬P ⇔ True in Haskell.
Here is me saying it in Ruby: https://repl.it/repls/AccurateJauntyDimension