Ah, but the thing is that the payoff is simply assumed. Nobody has actually shown this claim to be true. You spend more effort and you hope that there is a payoff. Yet you have things like Cabal, which is as flaky as anything written in PHP. Clearly static typing isn't some magic pixie dust that suddenly makes all your code work correctly.
> Clearly static typing isn't some magic pixie dust that all of a sudden makes all your code work correctly.
Static typing itself - no. What matters about Haskell is the equational reasoning (algebraic thinking, if you will) it enables, not static typing per se. Static typing is just the vehicle that makes it possible to pull off.
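To make "equational reasoning" concrete, here is a minimal sketch (my own illustration, not anything from the thread) of the map fusion law, an equation that holds for all `f`, `g`, and `xs` and can be applied mechanically to rewrite code:

```haskell
-- Map fusion law: map f (map g xs) == map (f . g) xs.
-- Because it holds universally, a reviewer or an optimizer can
-- rewrite the two-pass version into the one-pass version.
unfused :: [Int] -> [Int]
unfused = map (+1) . map (*2)    -- two traversals

fused :: [Int] -> [Int]
fused = map ((+1) . (*2))        -- one traversal, provably the same result

main :: IO ()
main = print (unfused [1,2,3] == fused [1,2,3])  -- True
```

The point is that the rewrite is justified by an equation about the program text itself, not by testing or by stepping through execution.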
There is a payoff already - see my comment to the Lisp guy above. We could also talk about how monads made LINQ great, or how streams in Java ended up more complicated than necessary because the language wasn't very well formally defined.
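The LINQ connection in one sketch: LINQ's query syntax desugars to `SelectMany`, which for sequences is the list monad's bind (`>>=`). The same desugaring in Haskell's do-notation (my own minimal example):

```haskell
-- A cross join via the list monad; LINQ's
--   from x in xs from y in ys select (x, y)
-- desugars to SelectMany the same way this do-block
-- desugars to (>>=).
pairs :: [(Int, Char)]
pairs = do x <- [1, 2]
           y <- "ab"
           return (x, y)
-- equivalently: [1,2] >>= \x -> "ab" >>= \y -> return (x, y)

main :: IO ()
main = print pairs  -- [(1,'a'),(1,'b'),(2,'a'),(2,'b')]
```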
And there will be a bigger payoff in the future, when we learn, for example, how to correctly type a whole large software component (like a web service). So far that's an art driven by engineering intuition, but formalization will help us build more robust systems.
The payoff is also assumed based on experience with other branches of mathematics (as I alluded to in my first comment), where, historically, formalization has always paid off. That doesn't mean you cannot get things right informally, but it's usually fiendishly difficult.
Again, you're making a lot of assertions in the absence of any evidence to support them. One would think this would be easily demonstrable given how long Haskell has been around, and yet no such evidence exists to my knowledge.
I think there is plenty of evidence of how types help to design better APIs; you just don't want to accept it. I already mentioned LINQ and lens; other examples are parser combinators and FRP (Observables in particular).
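As a sketch of what "types driving API design" means for parser combinators, here is a minimal hand-rolled parser (my own toy, not Parsec's actual implementation): once you pick the type `String -> Maybe (a, String)`, the combinator API largely falls out of the standard type-class instances.

```haskell
-- A parser consumes input and either fails or yields a value
-- plus the remaining input. The type itself suggests the API.
newtype Parser a = Parser { runParser :: String -> Maybe (a, String) }

instance Functor Parser where
  fmap f (Parser p) = Parser $ \s ->
    fmap (\(a, rest) -> (f a, rest)) (p s)

instance Applicative Parser where
  pure a = Parser $ \s -> Just (a, s)
  Parser pf <*> Parser pa = Parser $ \s -> do
    (f, s')  <- pf s          -- run the function parser first
    (a, s'') <- pa s'         -- then the argument parser on the rest
    Just (f a, s'')

-- A primitive: match one specific character.
char :: Char -> Parser Char
char c = Parser $ \s -> case s of
  (x:xs) | x == c -> Just (c, xs)
  _               -> Nothing

-- Combinators compose small parsers into bigger ones.
ab :: Parser (Char, Char)
ab = (,) <$> char 'a' <*> char 'b'

main :: IO ()
main = print (runParser ab "abc")  -- Just (('a','b'),"c")
```

Nothing here is specific to parsing strings; the same shape gives you Observables when you swap the "remaining input" for a stream of future values, which is the structural kinship the comment is gesturing at.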