> for the simple reason that Java is designed to be mainstream,
Not really. Java was designed to be practical, and it ended up being mainstream.
Most languages want to become mainstream (including Scala); most of them just fail to achieve that goal for a variety of reasons, often due to their design decisions (but not always).
> Most languages want to become mainstream (including Scala); most of them just fail to achieve that goal for a variety of reasons, often due to their design decisions
A language will often end up being used in one particular domain, frequently not the one it was designed for. Take another JVM language, Groovy: it's used a lot for throwaway scripts in testing, including build files for Gradle, and it was extended with a meta-object protocol for Grails, which has risen and fallen in sync with Rails. Then its backers retrofitted it with a static type system and tried to promote it for building systems, both on the JVM and on Android, but virtually no-one's biting.
I disagree, Java was designed to be marketable to the layman and for a very good reason ...
Software companies can scale either horizontally (e.g. hiring more people) or vertically (hiring the best). Because of the classical bell-curve distribution, really good people are hard to find, so horizontal scalability is very much preferred (and obviously not everybody can have "the best"). Unfortunately you can't really grow a company both horizontally and vertically. The reason is that getting good results from mediocre people requires lots of processes, coordination/synchronization and politics of the sort that doesn't appeal to really good people; and really good people have a choice, because the supply-demand equation favors them.
In the wild you'll also notice another reason why horizontal scalability is preferred. Corporate departments and consultancy companies are usually scaled horizontally because they get paid per developer-hour, and unfortunately there are only 24 hours in a day; hence the bigger the team, the bigger the budget. Small teams, on the other hand, happen in startups, where you often see 2-3 people doing everything. Small teams also happen in big software companies, among either privileged seniors or the folks doing real research.
Big teams get things done, as Alan Kay put it, on the same model as the Egyptian pyramids: thousands of people pushing large bricks around. Small teams, on the other hand, tend to handle complexity by means of better abstractions. Unfortunately better abstractions are harder to understand because they are, first of all, unfamiliar.
So back to Java ... it's actually a pattern that you tend to notice. Such languages are afraid of doing anything that wasn't already proven in the market. The marketing for such languages also relies on fear and doubt about the alternatives, which Sun took advantage of, appealing to the insecurity of the many.

This is a language that refused to implement generics because they were too hard, and then when they finally happened, the implementation chose use-site variance by means of wildcards, with optional usage, possibly the poorest choice they could have made. Generic types are invariant, of course, while arrays are (unsoundly) covariant. This is the language that doesn't do operator overloading, as if "BigInteger.add" is more intuitive than a "+", but on the other hand it does have a + for Strings. This is the language in which == on objects always means referential equality, so you've got tools converting every == to .equals because that's what you need most of the time. The language which until recently didn't have anonymous functions because anonymous classes should be enough. The language in which everything needs to be in a class, even though the concept of a "static method" makes no sense whatsoever. The language in which at least 30% of all libraries are doing bytecode manipulation to get around the language's deficiencies. Such languages also avoid metalinguistic abstractions such as macros like the plague, because OMG, OMG, people aren't smart enough for that (though truth be told, you're expanding your language every time you write a function, but don't tell that to the weak). Also, AOP is a thing that has a name in Java, which is pretty funny, given that in other languages it tends to be effortless function composition.
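A few of these complaints are easy to see in one short snippet. This is an illustrative sketch, not code from the thread: it shows the String-only "+", == versus .equals on objects, covariant arrays versus invariant generics, and optional use-site variance via wildcards.

```java
import java.math.BigInteger;
import java.util.ArrayList;
import java.util.List;

public class JavaQuirks {
    public static void main(String[] args) {
        // '+' is overloaded for String, but nothing else may overload it:
        String greeting = "foo" + "bar";          // fine
        BigInteger a = BigInteger.valueOf(2);
        BigInteger b = BigInteger.valueOf(3);
        BigInteger sum = a.add(b);                // no 'a + b' for BigInteger

        // '==' on objects is referential equality; '.equals' is usually what you want:
        String x = new String("hi");
        String y = new String("hi");
        System.out.println(x == y);               // false
        System.out.println(x.equals(y));          // true

        // Arrays are covariant (and unsound); generics are invariant:
        Object[] objects = new String[1];
        // objects[0] = 42;                       // compiles, throws ArrayStoreException at runtime
        // List<Object> list = new ArrayList<String>();  // does not compile

        // Use-site variance via wildcards is opt-in and easy to forget:
        List<? extends Number> numbers = new ArrayList<Integer>();
    }
}
```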
As an environment it turned out to be great, eventually. If you want another shining example, take a look at Go. Same story.
In other words Clojure or Scala will never reach the popularity of something like Java. Sorry, people can wish for it, but it will never happen. And that's OK.
I disagree. Java's design wasn't timid -- it was radical in its intentional reduction of expressivity compared to the most popular language around, because the much more expressive language was proven to be far too costly to maintain for the duration software has to be maintained.
But you are certainly right that Java was also meant to be familiar and unthreatening. It allowed itself to do that because it realized -- and was proven right -- that extra-linguistic features such as a GC and dynamic linking contribute more to productivity than abstractions do.
> Software companies can scale either horizontally (e.g. hiring more people) or vertically (hiring the best).
While this may be true, it has little to do with Java directly. For years I worked with the best algorithmeticians, physicists and mathematicians (physics simulation, real-time control). We were very much involved with the hard problems we were trying to solve with novel algorithms, and frankly couldn't be bothered with spending precious resources on elegant abstractions. Our problem was one of essential complexity -- not accidental complexity -- so we needed a simple language that could be easily shared and understood by everyone, one that was fast, and one that got the job done.
Believing that better developers opt for more powerful languages is what I call the "Blub developer paradox": the blub developer has only worked on simple, blub, problems (CRUD applications, web etc.), and hence believes his cool abstractions are the mark of the "smart developer". If he'd only look at developers working on seriously hard algorithms, he'd realize that abstractions are secondary, and it is developers above him on the essential complexity spectrum that opt for simpler languages with weaker abstractions.
That better developers choose the more powerfully-abstracting languages is completely and utterly false. The best developers I've ever known -- those who really came up with radical, game-changing solutions that truly advanced the state of the art -- were all C and Java people, and I don't think that is a coincidence. Their minds are focused on the algorithm rather than on alluring but ultimately distracting abstractions.
However, saying that only people who work on boring problems have the free mental resources to spend on nifty abstractions is as valid a generalization as the one you've made. There are many reasons to choose a particular programming language.
Personally, though, I believe that while the choice of the programming language matters, it matters so much less than design, choice of algorithms and availability of other tools. I no longer equate software with the code it's written in. A running program has many factors affecting its quality and cost, and the code used to write it is just one of them. Still, if I need to write a web app, I'd choose Clojure over Java every time. If I need to write a military command-and-control app, or a power-station control app? I'd probably go with Java.
You're attributing to me and my message things I haven't said, which is a sign of an unproductive conversation.
> For years I worked with the best algorithmeticians, physicists and mathematicians (physics simulation, real-time control)
If we are talking about experience, I've built an RTB system with soft real-time requirements handling tens of thousands of transactions per second, in which Scala was awesome because we used better abstractions for handling multi-threading, concurrency and the processing of huge amounts of data in real time.
Right now I'm working on a project for E.On that monitors and controls power plants. We are talking about real-time processing of signals, modeling of state machines that evolve according to those signals and machine learning for finding the optimal dispatch algorithm and for better fault detection. Scala has been awesome because functional programming along with its very static type system allowed us to better handle the event-based nature of the system and the business logic which is like a freaking fractal of complexity.
I'm now moving on to another project from the health-care industry that handles, you probably guessed it, monitoring of patients by analyzing signals coming from medical devices. Also in Scala, though there's lots of flexibility required in configuring the system and I might propose Clojure for some parts because read-eval.
Thinking that you're the only one who has interacted with "the best algorithmeticians, physicists and mathematicians", or that your problems are more interesting than what other people are doing, is snobbery. I never said that the best people choose different languages; all I said is that the best people choose better abstractions and that some languages are meant for building better abstractions. Speaking of which, for "blub CRUD applications" I would never choose something like Clojure or Scala, simply because for those types of apps PHP or Ruby have got that covered.
I don't think I'm the only one, etc. It just so happens that all the best people I've known to work on algorithms preferred languages with minimal abstractions. I very well acknowledge that others, who are no less talented, might prefer more abstractions.
However, I am saying that the assumption that better developers invariably prefer more abstractions and more powerful languages is absolutely wrong.
Personally, I've had a terrible experience with Scala: I found its abstractions distracting, its complexity hindering readability, its DSLism obstructing code sharing and staff migration among various teams, and grappling with the tool stack a horrible waste of time -- all with little benefit and lots of harm. I know that my experience isn't universal, but it's far from unique, either.
Clojure is a completely different story, partly because its designers are not purists and not interested (at least not as a primary concern) in PL research, but even with Clojure I don't think we have enough data to conclude that its cleverness (which, thankfully, is much lower than Scala's[1]) is a net gain for large projects that require maintenance for a decade or more by large teams.
All in all, I've seen that a solid, fast runtime with a good compilation and linking story, excellent profilers and deep monitoring (all provided by the JVM) yield much bigger gains than most new languages. Given that those extra-linguistic features are free and switching a language is extremely costly, I'm always hesitant about whether it's worth it. Staying on the JVM reduces the switching costs, and good interoperation with Java -- like Clojure has -- reduces it further, so I find the risk to be worth it for some less-risky projects.
[1]: Then again, so is every other language in history.
> That doesn't mean other people can't use Haskell to build web applications.
Very few companies do, though, so OP's point still stands.
This article would be a much better ad for Haskell if it weren't written by one of GHC's main authors. The article also casts doubt on whether Haskell was picked because it's the best tool for the job or because of the team's familiarity with it.
Do you know a better language for implementing an eDSL that needs to be pure? Also, familiarity does matter. Given the requirement of a pure eDSL, though, I think Haskell would be the easiest path for most teams.
It's always struck me as extremely bizarre that a company that regularly advertises that it's at the bleeding edge of software engineering practices (see Spolsky's numerous blog posts on the topic) made such a colossal error as writing their own language, and that it took them a decade to realize this mistake.
I also find this kind of phrasing weird:
> The people who wrote the original Wasabi compiler moved on for one reason or another. Some married partners who lived elsewhere; others went over to work on other products from Fog Creek.
It's like the author of this article goes out of their way to avoid saying that some people left the company, period. It also wouldn't surprise me if some of these defections were caused by Wasabi itself. As a software engineer, you quickly start wondering how wise it is to spend years learning a language that will be of no use once you leave your current company (yet another reason why rolling your own language as a critical part of your product is a terrible idea).
It has always struck me as extremely bizarre that computer science graduates would recoil from someone solving a business problem using what appears to be very basic compiler theory.
The second half of your comment transitions from weird to mean-spirited, as you begin speculating about people you don't know and their reasons for changing jobs. I'm a little confused as to why you've been voted up so high on the page.
I still think the fact that even most people with a 4-year degree still haven't done a compilers course is the core problem. Both other times we've had this discussion I haven't noticed anyone popping up to say "Yeah, I've written like 3 compilers and Wasabi was just an insane idea." (Of course, the Internet being what it is, someone will probably say that now. But the point is I haven't seen it before I asked for it.) A lot of people are doing the cost/benefit analysis with an order of magnitude or two too much in the "cost" column. Yeah, of course it looks insane then... but the problem is the analysis, not the reality.
Compilers just aren't that magically hard and difficult. I'll cop to not having written a true compiler yet but I've written a number of interpreters, and I've written all the pieces several times (compile to AST, interpret, serialize back out, just never had the whole shebang needed at once).
If you're reading this, and you're still in a position where you can take a compilers course, take it! It's one of the most brutally pragmatic courses in the whole of computer science and it's a shame how it's withered. (Even if, like me, you'll probably write more interpreters than compilers. And nowadays you really ought to have a good reason not to pick an existing serialization off-the-shelf. But it's still useful stuff.) It's one of those things that is the difference between a wizard and a code monkey.
I've written like 3 compilers, and while I don't think Wasabi was quite insane (they had an interesting set of constraints, so I could at least follow the logic), it's not the choice I would've made. Or rather, it's totally the choice I would've made as a fresh college grad in 2005 having written my first compiler for work (which was ripped out in about 2 months...it didn't take me that long to realize my mistake), but it's not what I would've done with the hindsight experience of that and other compiler projects.
The cost of an in-house programming language isn't in writing the compiler. It's training all your new team members in the language. It's documenting the language constructs, including corner cases. It's in not being able to go to Stack Overflow when you have problems. It's in every bug potentially being in either your application code, your compiler, or your runtime libraries, and needing to trace problems across this boundary. It's in integrating with 3rd-party libraries, and in not being able to use tooling developed for an existing mainstream language, and having to add another backend to every other DSL that compiles to a mainstream language.
All that said, I agree that if you're ever in a position to take a compiler course, do it. It's one of the most valuable courses I ever took, and really peels back the mystery on why programming languages are the way they are. It's just that the difference between wisdom and intelligence is in knowing when not to use that brilliant technique you know.
"It's just that the difference between wisdom and intelligence is in knowing when not to use that brilliant technique you know."
Which is precisely why I've never written a full compiler, even though I've written all the pieces many times.
For instance, instead of writing a parser, could you perhaps get away with just a direct JSON serialization of some AST? Do you really need to emit something, or will an interpreter do? So far I've never been so backed against the wall that I've actually needed a full compiler.
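Here is a minimal sketch of that idea, assuming nothing beyond the standard library (Java 9+): treat the nested maps and lists a JSON parser would hand you as the AST and interpret them directly. The {"op": ..., "args": ...} node shape is invented purely for illustration.

```java
import java.util.List;
import java.util.Map;

public class JsonAstInterpreter {

    @SuppressWarnings("unchecked")
    static double eval(Object node) {
        if (node instanceof Number) {
            return ((Number) node).doubleValue();      // leaf: a literal number
        }
        Map<String, Object> call = (Map<String, Object>) node;
        String op = (String) call.get("op");
        List<Object> args = (List<Object>) call.get("args");
        double result = eval(args.get(0));
        for (int i = 1; i < args.size(); i++) {        // fold the remaining arguments
            double v = eval(args.get(i));
            switch (op) {
                case "+": result += v; break;
                case "-": result -= v; break;
                case "*": result *= v; break;
                case "/": result /= v; break;
                default: throw new IllegalArgumentException("unknown op: " + op);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // (1 + 2) * 4 -- in real use this structure would come from a JSON library.
        Object ast = Map.of("op", "*", "args",
                List.of(Map.of("op", "+", "args", List.of(1, 2)), 4));
        System.out.println(eval(ast));                 // prints 12.0
    }
}
```

No grammar, no parser, no code generation: the "front end" is whatever produced the JSON.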
Yeah, one of the compilers I wrote just used JSON as the AST, with it being generated by a GUI interface. Another used HTML with annotations (although go figure, I wrote an HTML parser [1] for it, because there weren't any C++ options at the time that didn't bring along a browser engine). A third had a custom front-end but then emitted Java source code as the back-end.
The interesting thing is that the more experience you get, the more alternatives you find to writing your own language. Could you use Ruby or Python as the front-end, much like Rails [2], Rake [3], or Bazel [4]? Could you build up a data-structure to express the computation, and then walk that data-structure with the Interpreter pattern? [5] Could you get away with a class library or framework, much like how Sawzall has been replaced by Flume [6] and Go libraries within Google?
In general, you want to use the tool with the least power that actually accomplishes your goals, because every increase in power is usually accompanied by an increase in complexity. There are a bunch of solutions with less power than a full programming language that can still get you most of the way there.
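One of those middle-ground options, sketched below, is the "build up a data structure and walk it" approach with the Interpreter pattern. The Rule/FieldEquals/And/Not node types are hypothetical (not from any of the linked projects) and the example needs Java 16+ for records; it is only meant to show the shape of the technique.

```java
import java.util.List;
import java.util.Map;

public class RuleInterpreter {

    // Each node knows how to evaluate itself against a record (here, a simple map).
    interface Rule {
        boolean matches(Map<String, Object> fields);
    }

    record FieldEquals(String field, Object expected) implements Rule {
        public boolean matches(Map<String, Object> fields) {
            return expected.equals(fields.get(field));
        }
    }

    record And(List<Rule> rules) implements Rule {
        public boolean matches(Map<String, Object> fields) {
            return rules.stream().allMatch(r -> r.matches(fields));
        }
    }

    record Not(Rule rule) implements Rule {
        public boolean matches(Map<String, Object> fields) {
            return !rule.matches(fields);
        }
    }

    public static void main(String[] args) {
        // "status == 'open' AND NOT (assignee == 'nobody')" -- no parser required.
        Rule rule = new And(List.of(
                new FieldEquals("status", "open"),
                new Not(new FieldEquals("assignee", "nobody"))));

        Map<String, Object> bug = Map.of("status", "open", "assignee", "alice");
        System.out.println(rule.matches(bug));  // true
    }
}
```

The "program" is plain objects built in the host language, so you get its editor, debugger, and type checker for free.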
I'm doing this now, for a crappy language and a crappy processor. It's been a nightmarish hellscape of a project, but also very mind-expanding. Highly recommend.
(If you're interested in goofing around with Starfighter, you're going to get an opportunity to get handheld through a lot of this stuff.)
Assuming I could transfer the benefit of hindsight back to Joel's position in 2005, including all the knowledge of how the market has evolved over the past 10 years? I would've jumped on the SaaS bandwagon, hard, and converted the existing VBScript codebase to a hosted solution, discontinuing support for the PHP/Linux version and freeing the company up to migrate code as it wished on its own servers.
I recognize that this would've been a huge leap for anyone in 2005, when 37signals was basically the only company doing small-business SaaS and the vast majority of companies insisted that with any software they buy, they actually buy it and the source code and data sit within the company firewall. Heck, when Heroku came out in 2007 I was like "Who the hell would use this, turning over all of their source code to some unnamed startup?"
But looking at how the industry's evolved, that's pretty much the only way they could've stayed relevant. Many companies don't even have physical servers anymore. That's the way FogBugz did evolve, eventually, but they were late getting there and had to back out all the existing Wasabi code and fixes they made for it to be easily deployable (which was one of their core differentiators, IIRC; they were much easier to set up than Bugzilla or other competitors).
It makes me appreciate how tough the job is for CEOs like Larry Page or Steve Jobs, who have managed to stay at the leading edge of the industry for years. Larry was pretty insane for buying a small mobile phone startup called Android in 2005, but it turned out to be worth billions eventually.
Tangent: Your description of how people resisted SaaS a decade ago makes me wonder if the only reason the industry did eventually move toward SaaS was that most on-premises apps were such a nightmare to deploy. After all, some of the disadvantages of SaaS, such as lack of control over one's own data, are real. If Sandstorm.io had existed back in 2004, might we have avoided SaaS altogether? (Of course, if Sandstorm.io had existed back then, Fog Creek would still have needed to port FogBugz to Linux.)
I think the move to SaaS was a combination of factors:
1. The primary product of many companies got too large to deploy on their own server farms, and so they started moving toward AWS etc. for scalable hosting. Once your product is in the cloud, it makes sense to deploy your supporting infrastructure & tooling there as well, because otherwise you're paying the support, hosting, & sysadmin costs for just your non-critical corporate infrastructure.
2. Bandwidth became a non-issue. In the 1990s there was a very measurable difference between 10BaseT internally vs. an ISDN line to your hosting provider. In the 2010s, there's little practical difference between gigabit Ethernet vs. 10M broadband.
3. HTTPS became ubiquitous, taking care of many security risks.
5. Employees started to blur the line between work and home, leading to demand for work services that could be used, encrypted, from a user's home network. VPNs were a huge PITA to set up. This was a big issue for much of the early 2000s; one of my employers made some clever network software to punch through corporate firewalls with a minimum of configuration.
6. Development speed increased. SaaS companies could push new versions of their product faster, react to customer feedback quicker, and generally deliver better service. Because all customer interactions go through the company's servers (where they can be logged), they have much better information about how people are using their products. Deployed services were left in the dust.
tl;dr: #1-4 made lots of businesses go "Why not?", while #5 and #6 made them go "Yessss."
It's interesting that many of the arguments made now for why you should not use SaaS (like privacy and security, and lack of ownership) were relatively minor concerns then. I do kinda wish (in an abstract way) that something like Sandstorm would catch on, but I think they may be early: SaaS just isn't that painful, and until we have a major shake-out where a lot of businesses get taken out because their dependencies go down, it seems unlikely that it will become so. Or the other way this could play out is that a new powerful computing platform comes out that lets you do things that aren't possible with thin clients, and you see a rush back to the client for functionality.
All very good reasons. I'll add another - accounting.
The monthly bills for small purchases of SaaS fits on what could be expensed on a corporate card. By the time IT gets wind, the product has already infiltrated the organization. If there's a very large up front cost, then IT is involved, you need a formal RFP process, lots of people weigh in, those opposed to the purchase can try and block it... As soon as "Put it on the corporate card" became viable, power moved back to the business units.
With Sandstorm, we could actually get that effect on-prem. Since no technical expertise is needed for deployment, and since the security model is so strong, and the IT department will be able to manage resource quotas on a user basis rather than an application basis, it's actually entirely reasonable that people outside of IT could be permitted to install software without IT approval.
Granted, it may take a while to convince IT people that this is OK, but fundamentally they have every reason to prefer this over people cheating with SaaS.
Actually, not that late. I think their main problem was that the environment changed around them. Besides SaaS, the whole developer ecosystem changed as well: when I look at who really won the bugtracking market, it's GitHub, who added it as a feature on code hosting.
If winning the bugtracking market was the goal, they probably would've taken VC money. You may notice that everyone who's in a position to claim that has done so (Github, Atlassian, etc).
They did learn from this, as you can see by the very different paths StackExchange and Trello are on.
Joel wrote an essay about this. [1] His basic thesis is that organic growth wins over VC when there are entrenched competitors, few network effects, and little customer lock-in. VC wins when there are wide-open markets, strong network effects, and strong customer lock-in. Stack Exchange's investment was consistent with this thesis [2].
The developer tools market changed from one with very few network effects to one with a large network effect around 2010. The drivers for these were GitHub, meetups, forums like Hacker News, and just its general growth - they made coding social. When I started programming professionally in 2000, each company basically decided on a bugtracker and version control system independently, and it didn't matter what every other company did. By 2015, most new companies just use git, they host on GitHub, and if they don't do this, they're at a strong disadvantage when recruiting & training up developers, because that's what much of the workforce uses.
Interestingly, both GitHub and Atlassian resisted taking investment for many years - GitHub was founded in 2007 and took its first investment in 2012, while Atlassian was founded in 2002 and took its first investment in 2010.
Right! And this isn't even a compiler, the way most people think of "compilers". It's a transpiler to three target languages each of which has an extraordinarily full-featured runtime. Two of which are (a) widely available and (b) as source languages, awful.
And it's not their own language, it's an extension of VBScript. And now that the tools around C# are better and Linux support for .Net is official, they have used these tools to transition to C#. Like you, I don't get the outrage.
Do you think that the name Wasabi contributes to the outrage?
Coffeescript has a similar name to Javascript, so you can quickly draw an association between the two.
The name Wasabi doesn't have an obvious connection to the VBScript that it's based on, which seems to be the cause of people talking about writing a whole new language, etc.
I've written some toy compilers, and I can at least say:
1. compilers have bugs
2. it really sucks not knowing if a bug is in your code or in your compiler
3. it sucks not having a source-level debugger
Anyone can write a simple compiler, just like anyone can make a simple database. The hard part (at least for a non-optimizing compiler) isn't the comp-sci theory, it's making the tooling around, and the extensive amount of testing needed to be sure you don't have subtle data corrupting bugs lying around to bite you.
I won't categorically reject the idea, for instance I think Facebook writing their HipHop compiler was completely defensible. But you need people with compiler experience, and people who know the pain of working with crappy, undocumented, buggy toolchains to make that decision, not people who once took a compiler course.
I've written like 3 compilers* and Wasabi seems like it was probably a reasonable solution for the problem they had at the time. Compilers just aren't that magically hard and difficult.
There are very few situations where writing your own langue and toolchain is a good idea. I used to work on a proprietary company language that was actually a compiler generator for language-to-language translation, plus a bunch of other stuff, and it was a horrible pain.
Documentation? None
Online community? None
Transferability of skillset? None, apart from knowing how compilers work. Makes for good nerd conversation, but that's it.
Writing your own toolchain is almost as bad. I've seen multiple talented people leave companies I've worked at when they were forced to build and maintain horrible tools for the in-house ecosystem. Some too-big-for-his-britches second-system-as-a-first-system ass had written them, and everybody else got stuck with it.
As the other commenter noted, this seems like epitome of bad software engineering and I'm surprised employees put up with it if they were any good.
EDIT: I learned to program in assembly, so compilers didn't seem super mysterious to me as they are for someone who learns Java first perhaps.
Can't you say the same things about a proprietary database, or a proprietary template language? What are the kinds of computer science that we can safely deploy without taking extra precautions to document and maintain it?
Both of those should be looked upon with suspicion. I can't say "never do it", given that every employer I've ever worked at has had its own proprietary database, and one of the projects I worked on at Google was a proprietary template language. But all of them were a large maintenance burden, much larger than originally anticipated.
I think the old business adage about "In-source your core competencies, outsource everything else" applies here. If you derive a big competitive advantage from having a proprietary database or proprietary template, and it generates enough revenue to afford a dedicated team of experts to maintain it, build it. But if you have a bunch of smart & motivated developers who can build a proprietary database, but your product isn't databases or templates and your core differentiator isn't the performance or query patterns you get from building it yourself? Put them to work improving the product, and work with the infrastructure that other firms have built already.
i'd actually be way more suspicious of a proprietary database, unless there was a very compelling reason why none of the existing ones worked. maybe this is just my inexperience in the field, but a database engine seems orders of magnitude harder to get right and maintain than a compiler (that too a transpiler, so you can even human-inspect the output!) does.
Yes. Any proprietary system will require you to document/educate the users, and you will not have the benefit of an online community to get help from, or bug fixes, or security analyses. There are very few problems where rolling your own solution is the right solution. Maybe if you are Google and your database isn't big enough or something.
If you have great people building the software, or at least competent ones, and you have competent users, you might succeed, maybe. But that's assuming you have a maintenance plan and a roadmap, which most software companies do not. Maintain software? YOLO! What happens when you have a bunch of morons using and maintaining the software?
In short, computer science in industry is largely practiced as shamanism by people who cannot engineer their way out of a crackerjack box.
"There are very few situations where writing your own langue"
Well, I can see how you might struggle there ;-)
Good natured snarks about spelling aside, part of the issue is that writing, documenting and maintaining your own language is only hard if your toolchain sucks.
If you're interested in writing a specialized language to solve a particular problem, take a look at PEG for JS, and either Racket or Common Lisp (the latter if you need native compilation).
I've recently been involved in the design and implementation of an English-like language for the expression of business domain concepts in web apps. It's a great approach if done thoughtfully and professionally.
That's probably the key, actually. The horror stories we hear are of the bad examples. And we all know that shitty tools, weak languages and bad documentation can come out of large software companies as commercial products as well.
I didn't take a course on compiler construction, and now I don't remember if my university's CS department had one (it was a fairly mediocre CS department at a state university). Now I wish I had.
Do you think a good compiler course would prepare the student to do a project with the scope and complexity of Wasabi? For one project, I wrote an interpreter for a little domain-specific language, then later reworked that interpreter into an on-the-fly compiler (to Lua, to avoid double interpretation). But that's a long way from writing a compiler for a general-purpose language, that can do global type inference and produce human-readable output in a target language that's fairly different from the original VBScript (if not Wasabi itself).
The trickiest bit of Wasabi is the type inference, which I admit is not "production-ready" (or "good code") because we basically invented it from scratch. If I were to do it now, I would know just enough to realize that I need to read about Hindley-Milner rather than reinvent the wheel.
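For anyone curious what "reading about Hindley-Milner" buys you, here is a minimal, hypothetical sketch (in Java 16+, and emphatically not Wasabi's actual code) of the unification step at the heart of that style of inference. The Type/Var/Con names are made up for illustration.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class Unify {

    interface Type {}
    record Var(String name) implements Type {}            // a type variable, e.g. 'a'
    record Con(String name, List<Type> args) implements Type {}  // e.g. Func(Int, Bool)

    // Follow substitution chains so we always work with the representative type.
    static Type resolve(Type t, Map<String, Type> subst) {
        while (t instanceof Var v && subst.containsKey(v.name())) {
            t = subst.get(v.name());
        }
        return t;
    }

    // Occurs check: refuse to build infinite types like a = List(a).
    static boolean occurs(String var, Type t, Map<String, Type> subst) {
        t = resolve(t, subst);
        if (t instanceof Var v) return v.name().equals(var);
        Con c = (Con) t;
        return c.args().stream().anyMatch(arg -> occurs(var, arg, subst));
    }

    // Extends 'subst' so that a and b become equal, or throws if they can't.
    static void unify(Type a, Type b, Map<String, Type> subst) {
        a = resolve(a, subst);
        b = resolve(b, subst);
        if (a instanceof Var va) {
            if (!(b instanceof Var vb && vb.name().equals(va.name()))) {
                if (occurs(va.name(), b, subst)) throw new IllegalStateException("infinite type");
                subst.put(va.name(), b);
            }
        } else if (b instanceof Var) {
            unify(b, a, subst);
        } else {
            Con ca = (Con) a, cb = (Con) b;
            if (!ca.name().equals(cb.name()) || ca.args().size() != cb.args().size()) {
                throw new IllegalStateException("cannot unify " + ca.name() + " with " + cb.name());
            }
            for (int i = 0; i < ca.args().size(); i++) {
                unify(ca.args().get(i), cb.args().get(i), subst);
            }
        }
    }

    public static void main(String[] args) {
        // Unify Func(a, Bool) with Func(Int, b): infers a = Int, b = Bool.
        Map<String, Type> subst = new HashMap<>();
        Type left  = new Con("Func", List.of(new Var("a"), new Con("Bool", List.of())));
        Type right = new Con("Func", List.of(new Con("Int", List.of()), new Var("b")));
        unify(left, right, subst);
        System.out.println(subst);  // e.g. {a=Con[name=Int, args=[]], b=Con[name=Bool, args=[]]}
    }
}
```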
Producing human-readable output is an exercise in tedium and bookkeeping, not any particular amount of skill or brilliance.
Thanks for confirming my guess that the type inference was the trickiest part. These days, I guess Flow (http://flowtype.org/) would also be worth studying. Edit: Or PyPy's RPython.
I imagine most people upvoted because it sounded smart.
It's fascinating how easily cruelty can be popularized by using the right, nice-sounding words. Coat/mask your bile in rhetoric popular with a community, indirectly imply some terrible things, perhaps obfuscate anything that could raise uncomfortable, thoughtful questions, and presto! You'll have the right set-up to manufacture consensus.
It's funny because what got me is how dumb it sounded: the words "such a colossal error as writing their own language" made me think "that's the kind of thing a mid-level IT manager at an insurance company would say".
It is usually a "colossal error" to write your own in-house closed-source/proprietary language, no matter how small or large the language is.
The main reason is exactly as the article states, maintainability.
> As time wore on, our technical debt finally began to come due. Compilers like Wasabi and their associated runtime libraries are highly complex pieces of software. We hadn’t open-sourced it, so this meant any investment had to be done by us at the expense of our main revenue-generating products. While we were busy working on exciting new things, Wasabi stagnated. It was a huge dependency that required a full-time developer — not cheap for a company of our size. It occasionally barfed on a piece of code that was completely reasonable to humans. It was slow to compile. Visual Studio wasn’t able to easily edit or attach a debugger to FogBugz. Just documenting it was a chore
What am I missing? This is an internal language designed as an incremental improvement over VB that gave them cross-platform common codebase. It lasted 10 years: that's 19,932 in SaaS years. When they transitioned off of it, they did it not with a rewrite, but with mechanical translation.
Colossal error vs spectacular success story. Yay hyperbole.
I'd say the actual experience is somewhere in between. Sure it enabled them to support client requests to be cross platform and proved useful for a very long time, but what was the broader opportunity cost? Did supporting this proprietary infrastructure eat up resources and prevent them from exploring other ideas? Probably.
Big projects are routinely ported once or twice to new languages. Normally, moving a project from one language to another entails a full rewrite. Because the process of building their own language had the side effect of instrumenting all their code (the transpiler was custom-designed for their application), they were able to write a program to do the port for them. That's a win I don't see captured in the original blog post, and that's all I'm pointing out.
I don't think it ate up any more resources than, say, MySQL support. Fog Creek employees have produced tons of innovation since Wasabi was introduced, including brand-new companies like Trello (2011/2014) and Stack Overflow (2008), as well as in-house products like Kiln (2008), WebPutty (2011), and Make Better Software: The Training Series (2009-ish). None of these projects were particularly resource-constrained by having to do some compiler and runtime maintenance in the process of building FogBugz.
Thank you for the most insightful comment on this story. I find it bizarre that so many are unable to understand that making the decision to 'kill' wasabi today does not necessarily mean that it was a mistake.
This whole story is a fabulous insight into software development for business over the long-term.
I mean, adding an esoteric tool into your development process -- one where an employee has to write a book about its quirks -- sounds like a failure for the guys in the trenches.
On practically every software project in the history of software projects that have lasted more than 3 release cycles, there is a person on the team who could write a _${Some Component}, The ??? Parts_ book. Nobody ever thinks to do that, because _${Our Report Generation Library}, The ??? Parts_ is super boring.
The reality though is that line-for-line, a transpiler is probably not much harder to write than a serious report generation tool. I agree with the commenter upthread, who thinks this is a result of people simply never having tried to write a compiler before.
It might be, but for what many would argue are the wrong reasons, maybe even unnecessary.
> This is an internal language designed as an incremental improvement over VB that gave them cross-platform common codebase.
The problem is that at the time, there were already several cross-platform technologies in existence, many of which were being developed in the open. Utilizing one of these technologies would have allowed FogCreek to focus on what they do best: making software. Instead, they took a proprietary uni-platform language and attempted all on their own to make it cross-platform capable - which led to years of maintainability issues.
> It lasted 10 years
They gained an early advantage of not having to throw out the codebase and start over, yet they bought themselves 10 years of technical debt which continued to pose a burden on the small company. Many would argue that biting the bullet early on and switching to an open, community-driven cross-platform language/environment would have yielded much more return on the initial investment.
> When they transitioned off of it, they did it not with a rewrite, but with mechanical translation
Yes, that is an achievement, but again, for the wrong reasons.
I feel like you could take this comment, make very minimal tweaks, and deploy it in any language-war debate. "They succeeded with Golang, but for all the wrong reasons. They gained an early advantage but bought themselves 10 years of technical debt that a more modern cross-platform language would have spared them".
And I feel like when you get to the point where the best arguments you can make against something are isomorphic to the arguments you'd make against mainstream languages in language-war debates, that's a win condition.
Sure, FogCreek thought it was a good idea at the time, but over the years it became a significant burden, to the point they had dedicated staff working on just keeping Wasabi alive. Time was even spent writing an internally distributed book about the caveats of the language!
I know you will dismiss this as "routine", but it's not...
For a small company, this is an enormous waste of time, money, and energy.
A big company like Google or Microsoft can afford to throw developers by the dozen at internal proprietary languages and not even blink -- but according to the article, FogCreek did blink every time they had to dedicate time to fixing it. It took time, money, and energy away from their core business - making software.
That's a lose condition.
FogCreek should have bitten the bullet and rewritten their application in an open, standardized cross-platform system. They would have been able to spend zero time worrying about the language, and 100% of their time worrying about their application. They could hire engineers off the street and have them produce in days-to-weeks instead of weeks-to-months. They would have saved themselves an enormous amount of time, money, and energy invested in a language that is now dead anyway.
It may have seemed like a good choice back when the decision was made, but in hindsight it appears to have been a very poor, short-sighted choice.
> For a small company, this is an enormous waste of time, money, and energy.
I think you have this backwards. A small company that writes a compiler and loses a few weeks of dev time per year survives for a decade, while spinning up various new products.
In another world, a small company rewrites its only source of revenue. 18 months later, they release the rewrite with zero new features and a chunk of new bugs and promptly die, because who's going to buy a product that spends a year and a half going backwards?
> FogCreek should have bitten the bullet and rewritten their application in an open, standardized cross-platform system.
Ah, so you happen to know better than Joel how much resources they had available at the time, how long the rewrite would have taken, how much it would have affected their ability to ship new features?
Fog Creek was a much smaller company back when they wrote Wasabi. Postponing the rewrite until they had more resources to spare was probably a good decision.
I think his point can be summarized as "it's better to have to maintain your software than to have to maintain your software and the compiler for it", which is hard to argue against.
Then again, given that they had the codebase already, writing their own transpiler sounds like it was the best option at the time.
"It is usually a "colossal error" to write your own in-house closed-source/proprietary language"
I'm curious if anybody on this thread who has written more than three or four compilers/parsers would agree with you.
Depending on the task, the only solution to some problems is to write a custom/proprietary language (whether it's closed source, of course, is up to the company).
But "bug tracker" is not the problem that was being solved.
The problem was taking a big pile of legacy code and translating it to more than one platform vs rewriting the entire app from scratch in a new language that was cross platform.
It just happened that the legacy code was for a bug tracker, but it could have been for anything.
It is usually a "colossal error" to write your own in-house closed-source/proprietary language, no matter how small or large the language is.
Really?
I don't think so, having done this once, to the great success of the company. They also wrote their own database. The compiler was maintained by a team long after I left the company.
Software is hard. There are more interesting, and hard, problems than pushing the value of a field from one subsystem to the other.
Whereas, complex systems built on popular OO frameworks never have issues with maintainability.
/me deactivates snark mode
I see the issue exactly the other way around.
If you can build a domain specific language that lets you express concepts in a clear way free of boilerplate or conceptual hackery (ORMs, for example) you will wind up with a much lighter maintenance load than the equivalent functionality built on an off the shelf framework.
Of course, there's nothing stopping you from using the two as appropriate. Simple CRUD app? Rails. Need to express a very complex domain in a readable, easily maintained form? Custom language time.
Most things that have lots of reverse-dependencies require a significant amount of maintenance. Compilers are not much different from "common" libraries, or special-purpose frameworks, in that respect. Also, writing a direct-to-assembly compiler is probably not a wise idea.
I'm not saying you are wrong. The era of building custom tools just for your company, only to do specific jobs, is long gone. That's because of a lot of risk factors in a business.
It would work if management didn't treat programmers as replaceable cogs in a wheel. But the day you seek to make the craft of programming a commodity that can be practiced by anyone, you need an ecosystem whose knowledge is available to everyone. Only then would you get reasonable expertise at affordable prices to finish your projects.
The opposite is to make programmers so special that the knowledge of specific tools is available only to them. This puts programmers in a much stronger position to negotiate pay and other things at will, because the very existence of the business depends on them.
This is like saying "the era of computer science is gone, we all just wire form fields to database columns now".
The fact that a team building the canonical wire- form- fields- to- database- columns application (if there weren't such a thing as "blogs", bug trackers would be the "hello world" of database-backed web apps) found a reason to deploy computer science is, to me, a beacon of hope that we're not all doing something that's going to be automated away in 10 years.
> The era of building custom tools just for your company, only to do specific jobs, is long gone.
Isn't that the biography of 99% of the open source projects in the big data and distributed processing world? I understand they are open now, but didn't they start as custom tools just for a single company?
It seems like the "error" that Fog Creek made was to not open source Wasabi, though even that seems more like a hindsight has 20/20 vision kind of thing, as open sourcing a project is no small feat, especially to a small software company.
Sorry, but nine-hundred and ninety-nine times out of a thousand, it's a complete and total waste of resources to write a proprietary language or runtime environment to solve a non-core business problem.
First: they didn't write a runtime. They used .NET/Mono as their runtime.
Second, you wrote your comment in a text box rendered by an application notorious for being written in a custom language (in fact, it seems like arc:lisp::wasabi:.net).
Third, do you have evidence to support "nine hundred ninety-nine times out of a thousand", or is that hyperbole? Have you worked on projects where people used custom languages? Your argument would be much more interesting if we could read more about your experience with it.
You could view their core business problem as easily creating products that as many people as possible can pay for (since that's how they make their money).
Because it's bizarre and stands out. The clarifying sentence of marrying people that live elsewhere etc screams of "the lady doth protest too much".
There's nothing wrong with saying "Over time, through the natural turnover that happens at all companies, none of the original Wasabi designers are still working at Fog Creek".
Sure, some snarky people will make the comment "Oh yeah, I bet they left BECAUSE of Wasabi", but most will ignore them.
By completely negating the possibility that any of those people left for any reasons not involving family, it actually seems to INCREASE the probability that Wasabi was more unpopular within FogCreek than Joel would prefer to admit.
Do you know anyone at Fog Creek, or even anyone who has ever worked there? Did they tell you something that would lead you to believe that a blogger for Fog Creek would, completely unprompted and with no real need, make up stories about why people left?
Or could I just as easily argue, with the same total lack of grounding, that you're a secret shill for Atlassian trying to poison the well? (You aren't, of course, but you take my meaning.)
Good news, I actually do know everyone who was part of the original build of Wasabi! None of them left because of the language. I think this accounts for everyone who was there at the time:
1. Original author left because his wife was going to medical school out-of-country and Fog Creek didn't allow remote work at the time.
2. Second author left because his wife was going to medical school out-of-state and Fog Creek didn't allow remote work at the time (see a pattern?). Later came back because Fog Creek offered remote work. Went on to author the blog post we're talking about.
3. Developer left to go work on Stack Exchange (me!)
4. Developer left to go make the world a better place at Khan Academy
5. 2x developer left to go work on Trello
I think that was all of us. People move on in the course of 5+ years. Turns out most of those reasons don't have to do with programming language.
FWIW, I think Wasabi was a bad decision and I'm not going to defend it. But I really don't like these massive assumptions about people's motivations for leaving.
Can I guess at why you think it was a bad decision?
(a) Too incremental to be worth it, given where the .NET ecosystem was heading
(b) FC couldn't commit the resources required to adequately support a whole language, and it's better to commit to a lower common denominator than limp with a poorly supported language
(c) If you're going to create an additional obstacle to on-ramping employees, it had better be something every project in the company takes advantage of --- like, even if you had built FogBugz in OCaml, that would be a problem since the company is not designed to take advantage of OCaml.
(d) Unless you're getting a truly transformative advantage from a custom language, it's not worth it to be out of a "Google your way out of most problems" mainstream sweet spot
(e) No matter how good the language is, using a different language makes you incompatible with toolchain, so edit/test/debug cycles are needlessly painful
I obviously have no idea if Wasabi was a good decision or not, but a workplace where people are allowed to deploy basic computer science to solve problems is (sadly) an attractive stand-out to me.
So, I'm not David, so I'm not going to pretend to know what his thoughts are, but I'll say that I've always had really mixed feelings about Wasabi.
Let me start by saying that Wasabi as a strategic move was brilliant. If David disagrees there, I'm a bit surprised: FogBugz represented an awful lot of battle-tested low-bug code, and finding a way to preserve it, instead of rewriting it, made one hell of a lot of sense. I'm with you that the general thoughts in this forum that we'd have to be insane to write a compiler are misguided. Wasabi let us cleanly move from VBScript and ASP 3 to .NET without doing a full rewrite, and I'd be proud to work at a place that would make the same decision in the same context with full hindsight today.
That said, I think Wasabi made two technical decisions that I disagreed with at the time and still disagree in with in retrospect. First, Wasabi was designed to be cross-platform, but targeted .NET prior to Microsoft open-sourcing everything and Mono actually being a sane server target. At the time, I thought Wasabi should've targeted the JVM, and I still think in retrospect that would've been a much better business decision. I really prefer .NET over Java in general, but I know that it caused us an unbelievable amount of pain back in the day on Unix systems, and I think we could've avoided most of that by targeting the JVM instead. Instead, a significant portion of "Wasabi" work was actually spent maintaining our own fork of Mono that was customized to run FogBugz.
Second, Wasabi worked by compiling to C# as an intermediary language. There was actually an attempt to go straight to IL early on, but it was rejected by most of the team as being a more dangerous option, in the sense that maybe three people on staff spoke IL, whereas pretty much everyone could read C#. I also think this was a mistake: the C# code was not human-readable, made debugging more complicated (VS.NET had something similar to source maps at the time, so it wasn't impossible, but it was very indirect and quirky for reasons I can get into if people are curious), and that decision meant that Wasabi had all of the limitations both of its own compiler, and of Microsoft's C# compiler. IMHO, these limitations are a big part of why the ultimate move away from Wasabi was even necessary in the first place, since they increased both the maintenance and developer burden.
So from my own perspective, I think that Wasabi was a mistake in that, if we were going to go to C#, we should've just got the translation good enough to really go to C# and then ditch Wasabi; and if we weren't, we should've actually owned what we were doing and written a genuine direct-to-IL compiler so we'd have more control over the experience, instead of going through C#. But I still really do genuinely believe that our going to Wasabi was a brilliant strategic decision, and I think Fog Creek would have suffered immeasurably had we not done it.
I'm particularly interested in your thoughts on Wasabi compiling to C# rather than CIL. What characteristics of Wasabi led to the C# output being suboptimal for human reading and editing? If a compiler is going to output human-readable code, are there any general design pitfalls to avoid?
To add to Ted's comment, the main mistake we made in generating readable C# from the start was using `System.CodeDom` as our code generator, which explicitly does NOT care how readable your output is.
A better idea would have been to hand-code the generator, though of course that would have been a lot of string manipulation as well as a little extra effort.
Roslyn solves both of those issues for us, but it didn't exist until very recently.
Beyond what tedu and krallja pointed out, the debugging required inserting tons of #line markers in the C# output. But a single line of Wasabi could map to multiple lines of C#, making the definition of stepping ridiculous. Throw in that Wasabi necessarily grandfathered ASP globals that C# lacked and you also had fun variable display.
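For readers who haven't seen the trick being described: a transpiler can prefix each generated chunk with a C# #line directive so the debugger maps it back to the original source file and line. The sketch below is purely illustrative; the Chunk record, the "bugs.was" file name, and the emitted statements are invented, not Fog Creek's code. It also hints at the problem above: one source line often expands into several output lines under a single directive.

```java
import java.util.List;

public class LineDirectiveEmitter {

    // One chunk of generated C# attributed to a single line of the original source.
    record Chunk(String sourceFile, int sourceLine, List<String> csharpLines) {}

    static String emit(List<Chunk> chunks) {
        StringBuilder out = new StringBuilder();
        for (Chunk c : chunks) {
            // Tell the C# compiler/debugger where this generated code "came from".
            out.append("#line ").append(c.sourceLine())
               .append(" \"").append(c.sourceFile()).append("\"\n");
            for (String line : c.csharpLines()) {
                out.append(line).append('\n');
            }
        }
        out.append("#line default\n");  // back to normal line numbering
        return out.toString();
    }

    public static void main(String[] args) {
        String generated = emit(List.of(
                new Chunk("bugs.was", 12, List.of(
                        "var bug = Bug.Load(id);",
                        "if (bug == null) { throw new BugNotFoundException(id); }"))));
        System.out.print(generated);
    }
}
```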
The semantics of wasabi (VB) and c# are slightly different. A fair amount of the code was actually the result of various code generators. It dumped everything in one giant file (delimited by #file markers, though). Nothing intractable, but nothing high priority.
Having done something similar, but entirely different, several times, I'm surprised you didn't choose to slowly refactor the code to be more and more native C# over time. You start with 100% Wasabi / 0% C# and slowly work up the native C# parts, in code units, until you reach a level sufficiently high to feel confident to do a final push to switch entirely to C#.
(In my experience, you need to build up an inter-op layer first to make working in C# somewhat sane, but it's usually not hard to identify the necessary helper modules needed. Having the .NET runtime actually is a boon here since the IL is designed for inter-language inter-op.)
Why did you find yourselves maintaining a fork of Mono (versus fixing upstream)? Was it something like forking, although being problematic, had lower impedance than doing the necessary rituals for getting your changes accepted upstream?
You can't exactly tell customers to go check out the latest subversion head and compile it themselves. Changes were pushed upstream, but that doesn't push them to customers. Neither could we ship nightly builds, because who knows what changes get introduced? So we had a fixed release with a pile of patches on top of it.
I think there's a useful lesson here about precision in writing. _Why_ the Fog Creek developers left isn't central to the essay's main point, and it's not interesting in itself. But the author did include it, so clearly he thought it was relevant somehow, but why? One salient hypothesis, in this context, is definitely "they left because of Wasabi".
Well, I don't think they actually did leave because of Wasabi; the chain of reasoning I described above isn't very sound. But it's easy and obvious, and the author could have avoided it by doing a little less.
Sure. As I said, I don't think skepticism is _correct_ here. But a critical reader will always be asking themselves, "why did they write it that particular way?", so as an author you have to continually ask yourself the same question.
I do not know anyone who has (or does) work at FogCreek.
But I can imagine a scenario where an employee leaves due to tech stack woes. The employee may not want to burn bridges during their exit interview by saying, "I'm leaving because this tech stack sucks: it's affecting my long-term career progression (no other companies use Wasabi); management is too slow to adapt despite our repeated kvetching; the 1 full-time guy who knows the language is overworked and doesn't have time for proper training." Instead, the employee just says, "I'm leaving for personal reasons" and everything is wrapped up nicely.
Edit: Glad to hear from another commenter that this wasn't the case at Fog Creek. I have known people to leave jobs due to tech stack woes; they didn't tell management the real reasons why they left.
"The people who wrote the original Wasabi compiler moved on for one reason or another. Some married partners who lived elsewhere; others went over to work on other products from Fog Creek."
Unless I'm mistaken, this was all that was written about the reasons people left, which to me does not seem very "bizarre". Sure, he could have included "... and some left for other companies" but there's a difference between omitting the obvious and saying something like:
"Let it be known that all the ones who were truly good programmers, and by extension, real human beings! Only the best stayed at Wasabi. No one who was a good employee left for anything other than a personal reason. No one!"
It's not about the compiler, it's about the practical aspects of the language. The compiler isn't the hard part, the compiler is the easy part. You can write a compiler in a weekend if you want. Using it is a much different matter.
As GP notes, why would a dev bother inflicting upon themselves the brain damage of learning this language which will never be useful anywhere else? It's like requiring the devs at your company to use keyboards with a unique layout or something.
It's not necessarily true that it was an error. Their customers wanted Linux support, and they avoided the greater evil of having to rewrite their code or maintain two codebases (remember - this is ~2000 and there's no .Net or Mono, and all their code is written in old [edit:] VBScript).
This was started in 2005, not 2000, according to the post. .NET had been out for years; they're bleeding-edge guys, so they would have been looking at .NET 2. The compiler itself was written in C#, and Joel's defence dates from 2006.
The choice of evils was between rewriting their code in PHP or Java, dealing with Mono, sticking with ASP (which was already old-fashioned), or writing their own language.
From Joel's defence post:
We could use .NET, but then I'd have to pay engineers to install Mono for all our Unix customers, and the .NET runtime isn't quite ubiquitous on Windows servers.
The greater evil was definitely writing their own language.
Obviously, hindsight is wonderful, but a lot of people immediately pointed out that it was a bad decision, and as they say in their blog post, they ended up having to employ a full-time language developer. Installing Mono doesn't look so expensive now!
I was running FogBugz on Linux at a particular start-up in either 2001 or 2002, so what you say isn't true. The original project must have been started much earlier than 2005. Edit: See reply below, must be getting old :-(
So Thistle, the transpiler, was written either summer 2003 or summer 2004 (it's not entirely clear if he employed the intern that summer or the next) and Wasabi came later (so 2005 is probably correct).
> I fixed a crazy number of silly bugs in mono’s class libraries. To be fair, implementing all of .NET is a herculean task, and most of the fixes were easy enough to make. The result of these fixes meant that we had to ship a patched, custom version of mono
This was 2007. Using Mono in 2005 does not sound like it would have gone particularly well.
So in a way, the original mistake was made even before FogBasic/Wasabi entered the picture: by buying into the wonderful MS ecosystem, FogCreek condemned themselves to 10 years of hacks for cross-platform support. And they've learnt their lesson so well that they're now consolidating on C#, another de-facto MS-only technology which only benefits from the fact that someone else (Mono) is doing cross-platform hacks for everyone.
Joel is great, but this choice baffled me in the past and baffles me today. For the sort of software FogBugz is, they would have had a much simpler life with Java, Python, Ruby, even Perl. Despite all of Joel's insight into "making html sing", he behaved like an accountant building humongous Excel macros "because that's what we know".
Remember the age of Fogbugz. It was initially released in 2000.
MS Windows was by far the dominant operating system. Virtualization was still in its early stages, and mostly at the desktop level. Linux was still growing in the server market but not dominant as it is today.
And what exactly is wrong with the MS ecosystem if you're targeting enterprise? There are still a lot of businesses that work exclusively with Windows servers, with IT managers who don't want the headache of having Linux servers.
Enterprise software tends to be a notch or two below consumer software in the "it just works" department, and my experience with deploying Java-based enterprise software was pretty negative. In 2000, not a lot of people were using Ruby, Python or Perl for enterprise web apps. It was mostly ASP and JSP back then.
> Remember the age of Fogbugz. It was initially released in 2000.
God, don't I half remember it. I was a junior ASP dev at the time, for my sins. Java was hot like the sun and PHP was the default choice for the young and penniless. Perl was mainstream. Python and Ruby were new and rough (they were crap for webdev on shared hosts, with zero support by ISPs, but alpha geeks were already flocking to their ecosystems, Python in particular).
I'm sure part of the reasoning was that FogBugz did not start as a product -- the product back then was CityDesk, which was even more tied into the MS world -- but still, the "server scene" back then was already unix-y, which is why they were pretty soon forced to consider Linux support. I still think it was a shortsighted approach but hey, FogCreek is still alive 15 years later, so I guess it wasn't all that bad.
One of my first jobs ages ago was to convert a large Perl codebase to ASP 2.0 because my new boss, a 22 year old CTO, was replacing a guy more than twice his age and Perl was "for old folks" as he put it. This one person turned the whole company into a Microsoft dev shop with one decision simply because he didn't feel comfortable around Perl code.
COO at a previous company did this, went from Java to .Net shop. This was two years ago. Laid off most of the Java developers, brought in consultants. I had left, but there were some really intricate business processes in that code base, running on a 40 node jboss cluster. They embarked on a rewrite, which of course is taking longer than promised. All the Java developers who could, got jobs and left; now there are only two guys left who know how to deploy to the cluster. And they use scripts, they don't understand anything they're doing. The competent people left long ago.
"In particular, we didn't want to have to tell them to get a Java Virtual Machine up and running, because that's not easy, and they're not really a hundred percent compatible."
He seems like an OK guy, but comments like that make me think he made a decision first and then made up reasons later. How hard can it be to make a simple installer that checks and installs Java, many applications do that.
>> How hard can it be to make a simple installer that checks and installs Java, many applications do that.
You'd be surprised how many applications don't do that too. There's a reason why a lot of people say "enterprise software sucks" -- it's usually because the software makers value new features over improving how things work.
It is more difficult than it looks, but then again supporting multiple Linux distributions with a simple Apache installer is more difficult than it looks as well.
I suspect that this is a function of overestimating the effort on the Java side and underestimating both the demand and the work on the non-Windows side.
They used to be an extremely Windows-centric company.
To a point. Headless stuff tends to work way better than the GUI stuff.
It's kind of the same tune with mobile software -- "Native feels and runs better than everything else". In the case of enterprise software GUIs, it's particularly true.
The difference this time is that C# is now open source. Sure, C# is still essentially Microsoft-only, but there are enough users that if Microsoft abandoned it, it's fairly certain that others would be there to pick up the pieces.
IIRC, Joel once wrote that he started coding FogBugz to learn some VBScript. Once you start a project in a language, there is a mighty amount of inertia to overcome to move to a different language...
I came away with a different impression to you: in the past there were good reasons to develop their own language; they recognised that those reasons no longer exist; they used modern tools to dig their way out from underneath the accumulated debt.
Nowadays we have open-source runtimes, .net running on multiple platforms, and componentised tools like Roslyn. It is easy to forget that the .net tooling from 10+ years ago was much more limited.
Writing your own language is an unusual approach, so descriptions of dealing with that kind of technical debt are rare. I thought this article was valuable and interesting.
> As a software engineer, you quickly start wondering how wise it is to spend years learning a language that will be of no use once you leave your current company
I wouldn't worry about this at all. The choice of language on project n has never once negatively affected my work on project n + 1. As a programmer, my job has a lot more to do with solving problems (in the generic sense) than it does the tools I'm using to solve them.
Business software that provides business value is good software. FogCreek has been shipping Fogbugz to paying customers for fifteen years. If that's not success, it will do until success shows up. If that's a colossal error, where do I get one?
> It's like the author of this article goes out of their ways to avoid saying that some people left the company, period. It also wouldn't surprise me if some of these defections were caused by Wasabi itself. As a software engineer, you quickly start wondering how wise it is to spend years learning a language that will be of no use once you leave your current company (yet another reason why rolling your own language as a critical part of your product is a terrible idea).
I also found that passage oddly worded. We get it, people don't stick around forever; you don't have to try and hide it like it's some dirty little secret. Also, as a developer I doubt I would have wanted anything to do with a closed-source, internal-only, poorly-documented language. You may learn some concepts that transfer, but by and large you will have to start from scratch when you leave, and you won't have skills people are looking for. Also, if you do dive headfirst into Wasabi and love it and then leave, you'll probably be that annoying fuck at your new company that says shit like "Well, in Wasabi this was easy..." or "Wasabi makes this problem non-existent because..." Shut up, no one cares. It's crazy to me that a company as small as Fog Creek would attempt something like this, but to be fair I was born and learned to develop in a different environment than they did, so maybe the tools and languages available back then really just couldn't cut it.
Spending time working on Wasabi is only damaging to your career because of the current fad in hiring software developers that says "5+ years of Go development required."
I work with at least 10 languages a week. There is zero chance of us hiring people with years of experience in all those. We want people who have used multiple languages, and someone who worked on compiling one language into another would completely satisfy that itch.
> It's always struck me as extremely bizarre that a company that regularly advertises that it's at the bleeding edge of software engineering practices (...) made such a colossal error as writing their own language
There are no other languages that could replace Rust for Mozilla due to their combination of performance and safety requirements. It is also the product of Mozilla's research arm, which is not directly focused on immediate productization and employs several people with PLT backgrounds full-time. Finally, Rust has always been an open-source project and has developed a robust non-Mozilla community that contributes heavily to the compiler. None of these is true of Wasabi.
In any case, Spolsky still firmly believes that Netscape would have been better off today if they had continued work on the (completely unsalvageable) Netscape 4.x. Having extensively reviewed the history, Joel was dead wrong about this. There's no reason to assume his advice is more trustworthy than that of any other "thought leader," or that it is consistently followed within his own company.
Not very convincing. Maybe Patrick Naughton was in love with Objective-C, but there is little of that left in Java.
Depending on which angle you look from, you could see Java's interfaces as either a direct successor of Objective-C's protocols or a simplification of C++'s multiple inheritance.
If you were to believe James Gosling, it's clearly the latter that influenced Java's design. The fact that Naughton liked Objective C hasn't had much impact on what Java ended up looking like in 1995.
Having programmed many, many lines of C++, Java and Objective-C, I'd have to agree. Maybe parts of Java were influenced by Objective-C, but surely there is very, very little left of the philosophy behind how you write Objective-C code. In fact, I'd say there's almost no similarity whatsoever.
In terms of how you program, Java is a lot more like C++, minus half the language features that make C++ both powerful and dangerous. Obviously there are fundamental differences, but I think it's pretty clear that someone switching from C++ to Java would have a much easier time adjusting than someone going from C++ to Objective-C.
Also, I find the qualification of Objective-C as a 'mutant of C' a bit off the mark. Yes, Objective-C is a superset of C, it's built on exactly the same foundation, but the end result and how you use it are completely different in almost everything except the syntax of the pure C constructs and fundamental types.
I'd put that more around 1997-8. Java 1.0 and 1.1 definitely felt a lot more like programming Objective-C, both in terms of the features available and the style the community developed in. Somewhere in Java 1.2-1.5 things changed considerably. I agree that by then it felt way more like C++ than Objective-C.
JP Morgan [1], Standard Chartered [2], Skedge.me [3], Soostone [4], and Facebook [5] have all advertised open Haskell positions recently. And those are just the ones that come to mind immediately.
> Can you please list a few of the biggest reasons why?
The main problem is the difficulty to hire Haskell programmers. Recruiting is crucial to a startup's success.
Then there's the entire ecosystem surrounding Haskell, which is still old and antiquated (looking at you Cabal) and moving very slowly (interfacing to NoSQL or AWS, etc...).
Finally, the very high bar to just learn how to correctly program in Haskell puts it out of reach of 99% of the developer community.
"The main problem is the difficulty to hire Haskell programmers."
That does not match my experience. I posted a Haskell position, promising to pay substantially below market rate with no hope of equity, and had a dozen or so applicants that seemed worthwhile (and a handful that really stood out, one of which turned into the hire).
"Then there's the entire ecosystem surrounding Haskell, which is still old and antiquated (looking at you Cabal)"
IME, pip is worse and Python seems a go-to choice for startups.
"Finally, the very high bar to just learn how to correctly program in Haskell puts it out of reach of 99% of the developer community."
Something like 99.9999% of the developer community isn't going to be working at your startup regardless.
> The main problem is the difficulty to hire Haskell programmers. Recruiting is crucial to a startup's success.
"We were pleased to get a total of 42 applications, of which 19 merited serious consideration, and we eventually settled on a shortlist of 7 to interview. " [0]
> Then there's the entire ecosystem surrounding Haskell, which is still old and antiquated (looking at you Cabal) and moving very slowly (interfacing to NoSQL or AWS, etc...).
I recently used the Haskell riak[3] binding in my day job for an application. The mongoDB[4] package was uploaded "Tue Jan 19 06:04:06 UTC 2010"[5].
> AWS
There has been an AWS binding since 2011[1][2].
> etc
cassandra[6][7][8]
couchdb[9]
> Finally, the very high bar to just learn how to correctly program in Haskell puts it out of reach of 99% of the developer community.
A 10-year-old can do it[9], so I'm sure more than 5% of the developer community can as well!
I wouldn't say I'm a spectacular programmer, and I have both learned Haskell and used it in the real world easily enough. I will admit there was a learning issue in the past, but efforts to make Haskell easier to learn have happened and continue to happen.[11][12][13][14]
I'm looking forward to any rebuttals you may have to these! :)
I've seen startups killed by New Guy CTO Who Really Likes Java.
It's rarely a tool that kills a company, though. It's an attitude. It's not the tool's fault. If you use some Java (perhaps you need to be on the JVM, because you're doing Android development or because an external client wants Jars) but allow the better engineers to work in Clojure or Scala, you'll be fine. If you hire the New Guy CTO Who Really Likes Java and people are punished for using high-productivity languages, then you're fucked. (<whine>"How are we going to hire Clojure engineers?"</whine> "Train internal talent up to it, dumbass.") But you'd probably be fucked if you had that same guy and he was fixated on a different platform or language. It isn't Java's fault.
It's similar to the debate about Agile/Scrotum, in which defenders of these methodologies blame imperfect people for the messes that these processes often create. The Agile Manifesto is pretty reasonable, all considered. But the sorts of people who will impose Agile from the top down, as a justification for aggressive micromanagement and age discrimination (cf. the terminal juniority of a culture where even senior engineers submit to time tracking)... are also the sorts of people who will destroy your company if they ever get to make decisions that actually affect other people.
As for Haskell... I've never met anyone who regretted using it. Of about 20 cases where people put it into production, I've never heard of anyone regretting it. (I have heard regret stories about Scala, especially regarding the build chain circa 2011-13, though it may have improved.) There is one case where that company moved to F#, but it wasn't Haskell's fault so much as a standardization around the Microsoft stack.
Train an X programmer on Y, and you'll most likely get Y written as X for quite some time, until they gain experience, if ever.
The disconnect I've always had with using Haskell for anything is: you never get a result until the entire computation is complete. Oh, but monads, they say. Well, monads seem an awful lot like procedural code (to me, the uninitiated, unpracticed) so what's the point again? That's not meant as a criticism of Haskell, just one person's thought process on the issue. I'd use SML if I could.
> Train an X programmer on Y, and you'll most likely get Y written as X for quite some time, until they gain experience, if ever.
Most companies are run by short-sighted mediocrities who don't look more than 3 months into the future. I think longer term. In the long term, the transition is worth it. The benefits of moving from Java to Haskell or Clojure, say, are worth a few months of struggling and first-time code. If you have half-decent programmers and a culture that allows people to take time to learn how to do things right, that pays itself off within the year.
> The disconnect I've always had with using Haskell for anything is: you never get a result until the entire computation is complete.
That's not really true. If you need strictness, you can force evaluation and you don't need to use monads. I'm writing a Haskell class and I cover `seq` before I tackle monads in all their glory (i.e. beyond "here's how you do some basic I/O").
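To make that concrete, here's a minimal sketch (plain GHC, base library only) of both halves: laziness means you can consume results before the whole computation is finished, and `seq`/`foldl'` force evaluation when you actually want strictness.

```haskell
import Data.List (foldl')

-- Laziness: only the first five elements are ever computed, so results
-- are available long before the "entire computation" would finish.
firstFive :: [Int]
firstFive = take 5 (map (* 2) [1 ..])   -- [2,4,6,8,10]

-- foldl' uses seq internally to keep its accumulator evaluated,
-- rather than building up a chain of unevaluated thunks.
strictSum :: [Int] -> Int
strictSum = foldl' (+) 0

main :: IO ()
main = do
  print firstFive
  print (strictSum [1 .. 1000000])
  -- seq evaluates its first argument (to weak head normal form)
  -- before returning the second.
  let x = 2 + 2 :: Int
  x `seq` print x
```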
> Oh, but monads, they say. Well, monads seem an awful lot like procedural code (to me, the uninitiated, unpracticed) so what's the point again? That's not meant as a criticism of Haskell, just one person's thought process on the issue.
That's a fair point. Yeah, do-notation is a way to write imperative-looking code in a functional way. The truth is that for small programs, imperative code is often much more readable. That's how recipes are written, for example. Five lines of imperative code never killed anyone. It's when a function gets to 4,000 lines (often with multiple authors) that imperative programming completely shits the bed.
Monads, in short, are... a way to abstract around the concept of computational context so we can define what "stateful effects" are. `IO` has access to anything the machine can do. `ST` is single-threaded but stateful. `STM` can update some state (`TVars`) but has to follow the transactional rules. `State s` is a pure monad that looks stateful because it uses explicit state. Then we get to the monad transformer stacks, which are... potentially very complicated. (Phil Freeman's work with row types and the `Eff` monad in PureScript may replace the MT stacks one day.)
The point, in short, is: imperative code is not evil, and we're not trying to do away with it. We want to segregate it into functions that are preferably easy to read, so we can reason about what stateful effects do happen. In Haskell, the monad system allows us to enforce this at the type level. You can look at the type signature and get an upper bound (unless "cheat functions" like `unsafePerformIO` are used, and they generally shouldn't be) on the space of stateful effects that can occur.
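For anyone who hasn't seen what that looks like, here's a minimal sketch (base library only, so `State` and `STM` are left out) of "the signature is an upper bound on effects":

```haskell
import Control.Monad.ST (ST, runST)
import Data.STRef (modifySTRef', newSTRef, readSTRef)

-- Pure: the signature promises no effects at all.
double :: Int -> Int
double x = x * 2

-- ST: imperative-looking mutation inside, but runST seals it off,
-- so from the outside this is still a pure function.
sumSquares :: [Int] -> Int
sumSquares xs = runST go
  where
    go :: ST s Int
    go = do
      acc <- newSTRef 0
      mapM_ (\x -> modifySTRef' acc (+ (x * x))) xs
      readSTRef acc

-- IO: anything the machine can do, and the signature says so.
greet :: String -> IO ()
greet name = putStrLn ("hello, " ++ name)

main :: IO ()
main = do
  print (double 21)           -- 42
  print (sumSquares [1 .. 4]) -- 30
  greet "world"
```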
> I've seen startups killed by New Guy CTO Who Really Likes Java.
Of course, that's simple statistics. Java is the dominant language out there so it's going to be the most used language by startups that fail. That's a pretty silly argument.
And it works both ways: most of the startups that succeeded are based on Java (and other dominant languages). Hardly any of those based their product on Haskell. And again, I'm sure you can find a handful of those, but we're talking about statistical anomalies here.
That's not the experience Skedge.me had. Rewriting their app in Haskell gave them an 80% code reduction, fixed all their major bugs that had been significantly impacting customer satisfaction, and the performance increases dramatically reduced their hosting costs. Their experience is described in more detail here:
I don't think you understand the benefits of type checkers. Here are some:
- Catching, at compile time, errors that would otherwise happen at runtime (see the sketch after this list)
- Performance
- Occasionally, it can enable making your code mathematically provably correct
- Maintainability and flexibility in the sense that the code is easier to read by future hires
- And the most important advantage of all: automatic refactorings. Without them, the code base rots because developers are afraid to refactor; doing it safely in a dynamically typed language requires a lot of tests, which nobody really has. Even renaming a function cannot be done safely in a dynamically typed language without the oversight of a human
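To illustrate the first point, here's a minimal sketch (the names are invented for the example): the mistake below is rejected at compile time, whereas a dynamically typed language would only surface it when that code path actually runs.

```haskell
-- Wrapping the Int so user ids can't be confused with other integers.
newtype UserId = UserId Int

userName :: UserId -> String
userName (UserId 1) = "alice"
userName _          = "unknown"

main :: IO ()
main = do
  putStrLn (userName (UserId 1))
  -- Uncommenting the next line is a compile-time type error (a plain Int
  -- is not a UserId), not a runtime surprise:
  -- putStrLn (userName (3 :: Int))
  -- Renaming userName without updating its call sites would likewise fail
  -- to compile, which is what makes mechanical refactoring safe.
```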
I know of at least 3 dynamically typed systems that can rename a method automatically across the whole system in one click, and move instVars up to a superclass and do other tricky things like that, before Eclipse was even in the womb.
I'm talking about doing those refactorings back in 1999.
IRC supports pretty much nothing besides text. With Slack, you can format text, highlight code, and upload pictures and documents; it supports Google Docs out of the box, and there are plugins for Trello, git, and who knows what else. And it works flawlessly on all the mobile OSes (good luck getting a decent IRC client running on iOS).
IRC is a dinosaur compared to what recent team communication software like Slack and HipChat can do.
Although I don't necessarily disagree with you, none of these things are particularly difficult to implement. In fact, most of it could be done very easily if you had a client that could render/inject markdown. Git and Trello bots are very easy to write -- almost kata-level scope.
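For a sense of the scale involved, here's a sketch of the pure core of such a bot in Haskell (the commands and URL are invented for the example; a real Slack or IRC integration would just wrap this in its transport):

```haskell
import Data.Maybe (mapMaybe)

-- The "brain" of the bot: turn an incoming chat line into an optional reply.
reply :: String -> Maybe String
reply msg = case words msg of
  ("!echo" : rest) -> Just (unwords rest)
  ["!issue", n]    -> Just ("https://example.org/issues/" ++ n)  -- invented URL
  _                -> Nothing

-- Stdin/stdout stand in for the chat transport.
main :: IO ()
main = interact (unlines . mapMaybe reply . lines)
```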
Famous last words, but surely this is not hard given that there are 3rd party libraries for everything you need... I'm almost tempted...