Ending the Era of Patronizing Language Design (objectmentor.com)
25 points by fogus on July 13, 2009 | 24 comments


There's value in being able to tell the system "I do not want to do X. If I seem to be doing X, it's a mistake, so please stop me." People really do make mistakes, and there's nothing condescending about building a system to accommodate that. That's why my car's steering wheel has an airbag rather than a punji stick.


You're right, but I believe the point of the blog post was about the language designer saying "You cannot do this because it is too dangerous."

That is different from an application designer saying "Within the confines of this application, I will never do this, so flag it and stop me if it looks like I am."


I don't think that's really valid. How many recent languages expose raw memory access? The trick to building a great computer system / language is exposing the aspects that are most useful while hiding everything else.


But the aspects that are most useful to you are different to the aspects that are most useful to me. Recently I've been trying to write a wrapper for a C library in a couple of different languages, and raw memory access would have made my life much easier. But the languages didn't have it, making my work more difficult.

I think the (paraphrased) aphorism "Make the common simple; make the hard possible" is apt.


> the aspects that are most useful to you are different to the aspects that are most useful to me

Which is why we have so many languages. Also, we don't really need another C or Perl at this point, but new domains do show up, such as GPUs with insane amounts of parallelism.


> There's value in being able to tell the system "I do not want to do X. If I seem to be doing X, it's a mistake, so please stop me."

Frequently in the real world this sounds like "I am about to call out to code written by our valued partners in India. JVM, please make sure that the most they can do is touch the object I'm about to pass them. I can test the object for correctness when I get it back; I don't want to have to check whether they redefined Array#sort as well."

A lot of the things that irk skilled programmers about Java (I know what I want to do, why do you persist in stopping me?!) make a lot of sense when you're working with a geographically dispersed team of dozens, many with sharply disparate skill levels.


Taking the car analogy further: Plenty of people buy cars that will go faster than they will ever need to go.

The car designer ALLOWS you to do it, even if it's not usually safe.

The happy customer knows it's there if he needs it.


"The fact of the matter is this: it is possible to create a mess in every language. Language designers can’t prevent it."

I hate stuff like this. It's not a bool, it's a float: language features (e.g. type checking) eliminate some errors outright, reduce the likelihood and frequency of others, and reduce the cost of finding the rest. Coercing reduction factors into absolutes destroys the conversation for a cheap debate point.


> C++ avoided reflection

Not really; its design simply didn't support reflection. C++ actually lets you do almost anything imaginable, and then some.

Overall, though, I agree with the premise in part. Certain languages (Java, C#, and to a much lesser extent Python[1]) try to remove features they consider "evil" rather than give the developer enough rope to hang themselves. What they fail to account for is that we're developers: we should be competent enough to know the dangers of certain features, and competent enough to avoid them when they're not needed.

Languages that let you do whatever the hell you want aren't a new thing, though. C, Lisp, etc. have existed for a while. If anything, it's the idea of patronizing programmers that's new.

[1] Guido van Rossum has said many times that there should be only one way to do anything, and I think you see that in the crippling of lambda in Python (Guido favors named subfunctions). However, Python still has many "dangerous" features if you need them.


lambda being crippled in Python is the same story as with C++ and reflection.

It's pretty simple: Python's syntax doesn't let you have a statement inside an expression, and a lambda body is an expression. Even if it did, handling indented blocks inside expressions would be a nightmare!
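
For the curious, here is a minimal Python sketch of the restriction (the names are hypothetical):

    # A lambda body must be a single expression:
    double = lambda x: x * 2
    print(double(21))  # 42

    # Statements can't appear inside a lambda;
    # e.g. `lambda x: y = x * 2` is a SyntaxError.

    # Guido's preferred alternative, a named function,
    # may contain arbitrary statements in its body:
    def double_logged(x):
        result = x * 2  # assignment: fine in a def, illegal in a lambda
        print(result)
        return result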


I see your point, and it's a valid one, but then again, Boo managed to work around it.


Python's lambdas aren't functions/subroutines, they're callable expressions.


C++'s designers did not omit features because they worried that programmers would misuse them. This is, after all, a language that has multiple inheritance, allows arbitrary casts, and lets programmers grab a pointer to any arbitrary piece of memory. Stroustrup has said in many places that, in the end, he trusts the programmer and is in favor of including useful features despite their danger.

Full reflection doesn't exist in C++, I assume, because of the runtime overhead. The amount of reflection that is present through Run-Time Type Information (RTTI) does add overhead, so it's only compiled in when programmers use those features.

Further, I'm not a Ruby programmer, but from reading HN I know there's significant discussion in the Ruby community about monkey patching arbitrary classes. The consensus I've seen is that, in general, it's a bad idea. This is the same kind of proscriptive cultural advice the author says is not present in the Ruby culture.


"The consensus I've seen is that it, in general, it's a bad idea. This is the same kind of cultural proscriptive advice the author says is not present in the Ruby culture."

From my corner of the Ruby universe there's not even consensus about using the term "monkey patching", or when to apply it. (I find the phrase more confusing than enlightening, as most of the time people are not making a distinction between a deliberate coding decision to re-open a class for a specific result, versus employing the technique to hack pre-existing code you cannot otherwise properly alter.)

About the closest thing I find to consensus is, "Don't be dopey when re-opening classes", especially if you are writing code for unknown end users. But I know of few people who frown on ever re-opening classes.
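
For non-Rubyists: re-opening a class has a rough Python analog, attaching a method to an existing class at runtime. A minimal sketch with hypothetical names (though unlike Ruby, Python refuses this for builtins such as list):

    class Invoice:
        def __init__(self, total):
            self.total = total

    # "Re-open" the class after the fact by bolting on a new method.
    # Reasonable when you own the class; riskier on someone else's.
    def with_fee(self, fee=5):
        return self.total + fee

    Invoice.with_fee = with_fee  # every Invoice instance gains it

    print(Invoice(100).with_fee())  # 105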


Don't mistake HN "consensus" for Ruby community consensus. I've been very loud on HN about how monkeypatching is a bad idea, but I am not a member of the Ruby community. (I got my burned-by-monkeypatching credentials from JavaScript.)

I never was really in the market for Ruby since I know Python very well, but certainly Ruby's love affair with monkeypatching would be sufficient by itself to turn me away.


> I got my burned-by-monkeypatching credentials from JavaScript

Ouch. I cringe just thinking about it.

(In JavaScript you can do some truly ugly and unnatural things, mixing up what people expect and what they get.)


I'm having trouble seeing your point here. You seem to be saying "features present in my language of choice are right and proper, while features not present were omitted for good and sufficient reasons and were a Bad Idea to begin with."

Also, there's a real muddle in your third paragraph. Are you claiming that Ruby culture is comparably proscriptive to Java or C++ culture? If not, it's tu quoque and the original author's point stands. But if your intent is to refute the central premise of his argument, you should really say so.


I'm not necessarily saying the features omitted from C++ were omitted for "good and sufficient" reasons, and I certainly didn't say they were a bad idea. What I am saying is that they were omitted for reasons other than the ones the author states. (The reason I'm not calling them good and sufficient is that it starts a discussion about the design of C++, which I think is too far off topic.)

I can't speak with confidence about Ruby's culture. But the talk I've seen about not freely modifying arbitrary classes seems to run counter to the author's main point, since that is exactly the sort of freedom he's talking about. That sort of proscription reminds me of warnings against global state in procedural programs and shared state in parallel programs.


This way of thinking about language constraints misses a key point: constraints aren't there just to bug the programmer into behaving better. They also give a person trying to understand a program a way to reason about it. The guarantees that exist in, say, Java code mean that a newcomer can say with much better precision what any given piece of code does (not what it's meant to do - what it will actually do). In a dynamic language, it might do anything.

I experience this exactly in the dual Groovy-Java universe I inhabit these days. I am faster at writing the Groovy code, and I think the end result looks nicer and smaller. But when there's some strange problem to debug that I don't understand, the Java code is much, much easier to reason about. Simple questions like "how is this function possibly getting called?" can be answered definitively in a fraction of a second with a key-press in Eclipse for Java, while I have to run a virtual simulation of the program in my head to figure out what the Groovy code might do.
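
A hypothetical Python sketch of why that key-press works in a static language but not a dynamic one: once a method name is computed at runtime, no static "find callers" search can see the call site.

    class Handler:
        def process(self, item):
            return "processed " + item

    def dispatch(obj, method_name, *args):
        # The target method is chosen at runtime; a static search for
        # callers of Handler.process will never find this call site.
        return getattr(obj, method_name)(*args)

    print(dispatch(Handler(), "process", "order-42"))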


Our team develops a .NET class library. As a game platform, we have the luxury of breaking binary and source compatibility between releases, because games are atypically short-lived applications. We provide side-by-side frameworks on both Windows and Xbox, so you don't need to upgrade to the latest framework before you release your game. But that doesn't stop people from upgrading anyway and then complaining.

When Microsoft releases a new version of a library, some subset of users upgrades instantly. They expect all of their code to keep working. When we break source compatibility (something most teams NEVER do), we do it very carefully. We only break things where there is a clear, large win and the upgrade path is straightforward. Customers have complained every single time we do this, because it is not what they inherently expect.

So now back to the original topic at hand: the "we're all consenting adults here" mentality. Yeah, that's great and all, but it requires an incredible amount of discipline. See, for example, how the Django project handles this: <http://code.djangoproject.com/wiki/BackwardsIncompatibleChan.... Personally, I love doing things this way.

However, consider a large non-software company. They created some internal app and ran into a small bug or shortcoming of a class library. They find out that the new version fixes their problem, so they upgrade. BAM. Ten other things break. Uh oh, that's not good. They already laid off the dev team that made this program; they have one vendor making a couple of quick fixes to handle their new 2010 business rules. This used to work, why are you breaking me? I just want that one new fix!

In order to prevent situations like this, you need to carefully monitor your public surface area. This is why the _foo naming convention exists in Python: it lets you say "we explicitly make no promise that this won't change." Believe me, if you make a type public or a method virtual, someone will instantiate it or override it. I've seen it many times. If you didn't explicitly plan for them to do that, you've still got a backwards compatibility bug should you change it. Even if you make it private, someone might call it with reflection and STILL get annoyed when you break them, although that is far more rare.
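
A minimal sketch of that convention, with hypothetical names:

    class Gateway:
        def charge(self, amount):
            # Public: part of the supported surface area.
            return self._submit(amount)

        def _submit(self, amount):
            # Leading underscore: internal detail, may change in any release.
            return "charged %d" % amount

Nothing technically stops a caller from invoking _submit directly; the underscore is a contract, not an enforcement mechanism, which is exactly the "consenting adults" tradeoff described above.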

So in summary: this cultural distinction is a meaningful difference in requirements between open source developers and commercial developers. The respective languages have evolved to meet the needs of their respective customers. One is not necessarily better than the other.

This is why C# 3 and Python are tied for my favorite language :-)


Doh ... "the newer breed of languages". Like Lisp?


Lisp has been one of "the newer breed of languages" for over 50 years now, and that run does not seem likely to end anytime soon.


"Lisp doesn't look any deader than usual to me." - David Thornley


"Lisp is not dead, it just smells funny."



