
How do you figure? If it’s mission critical, it means failure is unacceptable. Python and other dynamic languages (TS is a middle ground here) excel when failure isn’t so bad and making and deploying changes is fast/easy/cheap. This is exactly the wrong tradeoff for mission critical systems.


>How do you figure? If it’s mission critical, it means failure is unacceptable. Python and other dynamic languages (TS is a middle ground here) excel when failure isn’t so bad

In practice I find that static typing as done by, say, Java and C# excels at uncovering very shallow bugs. Its benefit is most obvious when you do very little automated testing. The tech industry's dirty little secret is that automated testing is done infrequently and badly, so static typing looks pretty effective by comparison.
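
To make "shallow" concrete, here's a made-up sketch (the function and names are hypothetical): a checker like mypy flags the bad call before anything runs, while dynamic code only fails if that branch actually executes under test or in production.

    def total_cents(price_dollars: float) -> int:
        return round(price_dollars * 100)

    total_cents(19.99)      # fine
    # total_cents("19.99")  # mypy: argument has incompatible type "str"; expected "float"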

I find that if you compare the cost of building sophisticated tests in a high-productivity language (like Python) with the cost of writing code in a low-productivity/high-type-safety language like Haskell, the cost/benefit trade-off is pretty blurry. If you take productivity out as a factor, Haskell looks much better.

Nonetheless, the idea that static types (and, to a lesser extent, formal proofs) are some sort of silver bullet persists. There is no silver bullet.


For sure static types are no silver bullet; no one in this thread is claiming otherwise and anyone who is making that claim is wrong. And of course various languages like Haskell and Java and C# have a productivity toll (although I’ve heard the latter two have improved considerably since I last used them), but I think almost none of that is due to static types intrinsically. Haskell’s productivity toll is due largely to its insistence on functional purity, bizarre syntax, inaccessible and jargon-laden culture and documentation, and general lack of attention to practical concerns. Java and C# suffer from an abundance of boilerplate, a large feature matrix, a bunch of baggage from an era where inheritance was shoehorned into everything, and similar “enterprise” cultural baggage (“IAbstractBeanFactory”).

A good counterpoint is Go. Despite my being far more familiar with Python (it’s my professional/work language), Go’s static types actually help me write correct code more quickly than I can in Python. Your mileage may vary, but I think anyone who has given Go an honest shake will find that it’s at least in the same productivity ballpark as Python. Note that this isn’t even considering tooling, deployment, or performance, where Go specifically excels over Python (also often thanks to static types).

Of course, with sufficient investment in testing, you could get the same confidence with Python that you get in static languages, but writing tests is a productivity cost as much as boilerplate or a pedantic compiler. And with static languages, I often find that I can write fewer tests for the same confidence (in fact, I often prototype in Go and backport to Python).


Static types do help eliminate errors and unwanted behavior without requiring you to write explicit tests to verify the same outcomes. A good static type system helps you reason about the code and therefore increases productivity.
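
As a hypothetical illustration (names invented), the annotation enforces at every call site what a hand-written guard test can only spot-check:

    from typing import Sequence

    def mean(xs: Sequence[float]) -> float:
        # The annotation is checked at every call site by mypy et al.
        return sum(xs) / len(xs)

    # In untyped code you'd write guard tests like this one by hand:
    def test_mean_rejects_non_numbers():
        try:
            mean(["a", "b"])  # a checker flags this line statically instead
            assert False, "expected a TypeError"
        except TypeError:
            pass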

On that note, why would you consider Haskell a low productivity language? People who are well versed in it seem to find it an exceptionally productive language to work with.


Types can help eliminate errors and unwanted behavior, yes. Just like tests.

Types also help document code. Just like tests.

The point is that both are investments and both have different payoff matrices. Sophisticated type systems are often better at preventing obscure logical bugs. They're also good at uncovering obscure not-bugs and preventing code from getting out until the compiler is satisfied. Bad unit tests also do this.

Haskell simply takes longer to write than other languages, given two developers of equal skill and experience anyway. I partly attribute the relative paucity of Haskell software out there to this.


To be clear, there are many other statically typed languages besides Haskell, most of which are more amenable to a faster pace of development.

Further, while I agree with your “different payoff matrices”, I think static types are a very low investment with a respectable payoff: they prevent bugs, document code, facilitate tooling (such as autocomplete and documentation generation), and permit easier code changes than comprehensive unit testing (even “good” unit testing). Of course, I’m not advocating that static types completely obviate tests, only that they obviate some of them; there is no clear answer as to how many or which, though. It’s very circumstantial.


From the paragraph following the "no silver bullet" paragraph:

> Skepticism is not pessimism, however. Although we see no startling breakthroughs, and indeed, believe such to be inconsistent with the nature of software, many encouraging innovations are under way. A disciplined, consistent effort to develop, propagate, and exploit them should indeed yield an order-of-magnitude improvement. There is no royal road, but there is a road.


I thought the emerging story was one of integrated testing and style conformity for the entire team. Who is working at a big software company for which this is _not_ true?


Millions?


Research doesn't support the claim that there is a strong correlation between static typing and fewer bugs. Is your opinion based on your gut feeling, or do you have data on the matter that you can provide?

https://web.cs.ucdavis.edu/~filkov/papers/lang_github.pdf


The type of bug (and related failure modes) is important though. Would you willingly board a plane if the avionics software and firmware had been written in a dynamically typed language?

Alternatively, usage of Rust may not be correlated with fewer bugs overall but I'd expect there to be fewer of the use-after-free variety.


I would prefer it to be written in something like Ada or Haskell, but I'd rather board a plane with avionics software written in Python than in Java.

I actually heard Airbus software is written in C++. If true, it should scare the shit out of anyone.


As someone who enjoys Python quite a bit, I think Java has a lot of options that would be really important there that Python is lacking, like real-time support, while Python offers no real benefit in return.


Space shuttle software is written in C, and lots of critical software is written in C++. Discipline, standards, and experience go a long way. Static verification is also nice.


I am seeing Ada popping up a lot more lately. Is Ada still being used?


The mere fact that something is "research" does not mean that it is credible. The study you reference in particular has a lot of issues: https://buttondown.email/hillelwayne/archive/science-turf-wa...


Of course, but research is still the best we have when it comes to supporting a claim or establishing certain truths about the world.

If you don't believe the results of the study, please provide the counter evidence.


Measuring programmer productivity is very hard using standard research tools so they are not necessarily the best we have. This is not physics.

Besides, all I wanted to do was to point out that you shouldn’t derive any conclusions from that study. I don’t need any counter evidence for that (apart from the evidence that the study is flawed of course).


I would say experience is the best we have.


Research is actually just all over the place (lots of papers come to many, many different conclusions), because it's impossible to control for various things like developer experience.

Instead, common sense is actually the best substitute we have.


It’s based on experience and a general consensus among the most experienced developers I know. The research isn’t remotely conclusive; it’s not worth much until it controls for much, much more.


Well, from what I’ve seen, if developers can’t use a dynamic language then they’re highly prone to cheat: e.g. not validating on I/O and using things like the C# dynamic type everywhere.

Point is, code culture matters much more than language.


My approach is to dial up strictness gradually as code proves its value. I'll start out building a project and not validating on I/O, but as the requirements get locked down and the code has proven itself, I'll clean up all the edge cases - which will often mean adding in progressively stricter validation on border code.
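
For example, here's a hypothetical sketch of what dialing up looks like at one border (names made up):

    # Version 1 trusts its input while the interface is still in flux;
    # version 2 hardens the same border once the requirements have settled.
    def parse_order_v1(raw):
        return raw["sku"], raw["qty"]

    def parse_order_v2(raw):
        sku, qty = raw.get("sku"), raw.get("qty")
        if not isinstance(sku, str) or not sku:
            raise ValueError(f"bad sku: {sku!r}")
        if not isinstance(qty, int) or qty < 1:
            raise ValueError(f"bad qty: {qty!r}")
        return sku, qty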

The advantage of this is that you don't end up wasting too much time "building the wrong thing". Let's say you took one form of I/O, built massively strict validation around it, and then realized later that you should have taken an entirely different form of I/O for your subsystem. All that time spent building validation into that now-useless part of the code was a pointless waste.

I don't have any stats, but my gut feel is that on average 40% of code can end up being tossed in this way (in some projects it's 100% =).

Prototyping speed, additionally, is useful not just in reducing the cost of building the right kind of code; it's also useful in reducing the cost of building the right kind of test (a really underappreciated facet of building mission critical systems).

In my younger years I used to believe that for mission critical systems "building the wrong thing" was somehow less of a problem in code because you could fix requirements and do architecture upfront with some sort of genius architect. Turns out this was wrong.


> using things like the c# dynamic type everywhere

I'm curious as to where you have observed this, because my experience has been exactly the opposite: even in circumstances where "dynamic" might lead to more readable code, C# developers are loath to use it, to the point where it's very hard to find it in idiomatic C# code.


Why do dynamic languages excel when failure is not so bad? What makes non-dynamic (compiled?) languages better when failure is bad?

I'm parsing your comment to mean, "it's easier to write correct code in compiled languages", but this is not obvious to me, or anyone who's, for instance, written any C at all.


C is the wrong comparison to make here - it's extremely low level. Java or perhaps C++ would be much closer.

* Ahead of time compiled languages reduce flexibility but allow the compiler to do more reasoning about the system.

* Statically typed languages allow the compiler to reason about the assignments you make and the methods you call. If the language is also AOT compiled, those errors surface before the program ever runs instead of as crashes at runtime.

* Dynamically typed languages significantly reduce boilerplate and avoid the mental effort of expressing an idea in a rigorously typed manner. It's the "hold my beer" approach.
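
You can see the trade-off within a single language. A hypothetical Python sketch, since annotations are optional there:

    class Rect:
        def __init__(self, w: float, h: float) -> None:
            self.w, self.h = w, h

    def area(shape):            # "hold my beer": works for anything shape-ish
        return shape.w * shape.h

    def area_checked(shape: Rect) -> float:
        # With the annotation, a checker can reason about this call the way
        # an AOT compiler would: misspell .w as .width and it's caught statically.
        return shape.w * shape.h

    area(Rect(2, 3))            # both run; only one is verifiable up front
    area_checked(Rect(2, 3))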


C is a good example precisely because it highlights the fact that explicit typecasts and memory safety are a LOT better at preventing bugs than static types.


> Why do dynamic languages excel when failure is not so bad?

There's the usual confounding example of banking in this case, but if we're talking about popular dynamic/memory-managed languages, you can put an equivalent of:

    try:
        actual_code()          # stand-in for the real work
    except Exception:
        log.exception("unhandled failure")
at a high level and be fairly sure things will be OK in the long run, even with failures along the way.
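
Spelled out a little more (a hypothetical worker loop): one bad job gets logged, and everything else keeps flowing.

    import logging

    log = logging.getLogger("worker")

    def run(jobs, handle):
        # handle() is whatever the actual work is; a single failure is
        # logged and the loop survives to process the remaining jobs.
        for job in jobs:
            try:
                handle(job)
            except Exception:
                log.exception("job %r failed; continuing", job)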

On the other hand, most static-typed systems handle failures explicitly.

I think C is a bad example here. We've come a long way since C. We know how to do better.

Unless you want to include the ecosystem as well: C + Valgrind + PVS-Studio + the Clang static analyzer do make it easier to write correct code.


Wait, but you can do that pattern in any language (Java or C++, both of which are static, and one of which is unmanaged).


You can't really do it in C++ in the same way: if you mess up, you'll crash on an invalid memory reference. With Java, you'll catch a bad cast or a null-reference exception instead.


I think there is a spectrum here, from mission-critical systems (utilities, telcos, ATC, airline booking systems, embedded) through to scripting. Perhaps I shouldn't have used the term "mission-critical" here, rather "big, complex systems whose failure costs money". Python is used extensively in certain disciplines(1), such as data science and data engineering, and there are plenty of systems that meet these criteria. Strong typing would wipe out an entire class of errors; the sketch below shows the kind I mean.

(1) Due to ecosystem richness, the need to collaborate cross-functionally, and other reasons.
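
A made-up but representative example of that error class: CSV readers hand every field back as a string, and since str * int is legal Python, the bug ships quietly instead of failing fast. An annotated border lets a checker refuse the mix-up.

    import csv, io

    reader = csv.DictReader(io.StringIO("units,batches\n12,3\n"))
    row = next(reader)                          # every value comes back as str
    total = row["units"] * int(row["batches"])  # "121212", not 36: silent bug

    def total_units(units: int, batches: int) -> int:
        return units * batches   # mypy rejects passing row["units"] here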

EDIT: grammar gremlins.


"Strong typing" is a term so vague that it loses all usefulness in conversation. Wikipedia gives 5 different definitions.

https://en.wikipedia.org/wiki/Strong_and_weak_typing


Dynamic languages can be strongly typed, meaning a dynamic language can have better type safety than a static language.
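
Python is the usual example: dynamically but strongly typed, it refuses to guess where weakly typed (static) C would happily do pointer arithmetic. A quick sketch:

    try:
        "1" + 1       # strong typing: no silent coercion
    except TypeError as e:
        print(e)      # can only concatenate str (not "int") to str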


The way to ensure minimal failure rates in code is a top notch test suite. Typed or untyped language doesn't matter.

Before test suites, I would have agreed with you.


Test suites are one leg of the stool. The others include memory safety, type safety and (for real-time applications) resource guarantees, as well as a formal development process that includes rigorous code inspections. Put everything together and you have "space shuttle computer" level investment required. As you give up legs, the stool becomes cheaper yet not as sturdy.


The problem with that is that the economics of many applications don’t favor elaborate test suites. If you’re building a SaaS product where you can detect a bug, fix it, and deploy in a matter of hours, and where developer velocity is among your most important metrics, an elaborate test suite is a liability because it makes changes slower (note that the sweet spot is not “no tests” but somewhere in the middle). Dynamic languages work reasonably well for these categories of applications, although some static typing can help improve that iteration time.


Having done TDD for 10+ years, I've come to realize that the main value tests bring is not the error checking, nice as it is.

The biggest value to me is that you can easily do huge refactorings with a lot of confidence. That in turn means you can keep redesigning your code and frameworks long after they would ossify into legacy code no one dares change in a normal project.
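
A tiny made-up example of that confidence: the test pins the observable behavior, so the body can be rewritten freely (loop, comprehension, whatever) without fear.

    def dedupe(items):
        seen, out = set(), []
        for x in items:
            if x not in seen:
                seen.add(x)
                out.append(x)
        return out

    def test_dedupe_keeps_first_occurrence():
        # This contract survives any number of internal rewrites.
        assert dedupe([3, 1, 3, 2, 1]) == [3, 1, 2]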

That's not to say I disagree that much with your point.


If you gave someone $1000 for each bug they found, they would find tons of bugs. So in the end it's a matter of time/money.



