
An interesting talk, and certainly entertaining, but I think it falls very short. Ultimately it turns into typical "architecture astronaut" navel gazing. He focuses on the shortcomings of "traditional programming" while imagining only the positive aspects of untried methods. Such an approach is frankly childish, and unhelpful. His closing line is a good one, but it's also trite, and the advice he seems to give leading up to it (i.e. "let's use all these revolutionary ideas from the '60s and '70s and come up with even more revolutionary ideas") is not practical.

To pick one example: he derides programming via "text dump" and lauds the idea of "direct manipulation of data". However, there are many very strong arguments for using plain text (read "The Pragmatic Programmer" for some excellent defenses of it). Moreover, it's not as though binary formats and "direct manipulation" haven't been tried. They've been tried a great many times. And except for specific use cases they've been found to be a horrible way to program, with a plethora of failed attempts to show for it.

Similarly, he casually mentions a programming language founded on unique principles designed for concurrency; he doesn't name it, but that language is Erlang. The interesting thing about Erlang is that it is a fully fledged language today. It exists, it has a ton of support (because it's used in industry), and it's easy to install and use. And it does what it's advertised to do: excel at concurrency. However, there aren't many practical projects, even highly concurrency-dependent ones, that use Erlang. And there are projects, such as CouchDB, which are based on Erlang but are moving away from it. Why is that? Is it because the programmers are afraid of changing their conceptions of "what it means to program"? Obviously not; they have already been using Erlang. Rather, it's because languages that are highly optimized for concurrency aren't always the best practical solution, even for problem domains that are highly concurrency-bound, because there are a huge number of other practical constraints which can easily be just as important or more so.

Again, here we have an example of someone pushing ideas that seem to have a lot of merit in the abstract but in the real world run into so much complexity and so many roadblocks that they prove unworkable most of the time.

It's a classic "worse is better" scenario. His insult of the use of markup languages on the web is a perfect example of his wrongheadedness. It took me a while to realize that it was an insult, because in reality the use of "text dump" markup languages is one of the key enabling features of the web. It's a big reason why the web has been able to become so successful, so widespread, so flexible, and so powerful so quickly. But by the same token, it's filled with plenty of ugliness and inelegance and is quite easy to deride.

It's funny how he mentions Unix with some hints of how awesome it is, or will be, but ignores the fact that it's also a "worse is better" sort of system. It's based on a very primitive core idea (everything is a file) and is heavily reliant on "text dump" based programming and configuration. Unix can be quite easily, and accurately, derided as a heaping pile of text dumps in a simple file system. But that model turns out to be so amazingly flexible and robust that it creates a huge amount of potential, which has been realized today in a Unix-heritage OS, Linux, that runs on everything from watches to smartphones to servers to routers and so on.

Victor highlights several ideas which he thinks should be at the core of how we advance the state of the art in the practice of programming (e.g. goal-based programming, direct manipulation of data, concurrency, etc.), but I would say that those issues are far from the most important in programming today. I'd list things such as development velocity and end-product reliability as being far more important. And the best ways to achieve those things are not even on his list.

Most damningly, he falls into his own trap of being blind to what "programming" can mean. He is stuck in a model where "programming" is the act of translating an idea to a machine representation. But we've known for decades that at best this is a minority of the work necessary to build software. For all of Victor's examples of the willingly blind programmers of the 1960s who saw things like symbolic coding, object-oriented design and so forth as "not programming" and more like clerical work, he makes fundamentally the same error. Today testing, integration, building, refactoring and so on are all fundamental aspects of prototyping and critically important to end-product quality as well as development velocity. And increasingly, tooling is placing such things closer and closer to "the act of programming", and yet Victor himself still seems to be quite blind to the idea of these things as "programming". Though I don't think that will be the view among programmers a few decades down the road.



I see where you are coming from, but I think you're getting mired in some of the details of the talk that perhaps rub you the wrong way and are therefore missing the larger point. Bret in all his talks is saying the same thing: take an honest look at what we call programming and tell me that we've reached the pinnacle of where we can go with it.

Whether or not you like this specific talk or the examples he has chosen, I think you would probably agree there is a lot of room for improvement. Bret is trying to stir the pot and get some people to break out and try radical ideas.

Many of the things he talks about in this presentation have been tried and "failed" but that doesn't mean you never look at them again. Technology and times change in ways that can breathe life into early ideas that didn't pan out initially. Many forget that dynamic typing and garbage collection were once cute ideas but failures in practice.

He doesn't mention things like testing, integration, building, and refactoring because they are all symptoms of the bigger problem that he's been railing against: namely that our programs are so complex we are unable to easily understand them to build decent, reliable software in an efficient way. So we have all these structures in place to help us get through the complexity and fragility of all this stuff we create. Instead we should be focusing on the madness that causes our software to balloon to millions of lines of incomprehensible code.


Please forgive my liberties with science words. :)

The purpose of refactoring is to remove the entropy that builds up in a system, organization, or process as it ages, grows in complexity, and expands to meet demands it wasn't meant to handle. It's not a symptom of a problem; it's acknowledgement that we live in a universe where energy is limited and entropy increases, where anything we humans call a useful system is doomed to someday fall apart—and sooner, not later, if it isn't actively maintained.

Refactoring is fundamental. Failure to refactor is why nations fall to revolutions, why companies get slower, and why industries can be disrupted. More figuratively, a lack of maintenance is also why large stars explode as supernovas and why people die of old age. And as a totally non-special case, it's why programs become giant balls of hair if we keep changing stuff and never clean up cruft.

A system where refactoring is not a built-in process is a system that will fail. Even if we automate it or we somehow hide it from the user, refactoring still has to be there.


What if programming consists of only refactoring? Then there is no separate "refactoring step", just programming and neglect. This is what Bret Victor is getting at. It is about finding the right medium to work in.


We have that already, i.e. coding to a test. It sucks because you never seem to grasp the entirety of a program; you just hack until every flag is green. It doesn't prevent entropy either. The only thing that prevents code entropy is careful and deliberate application of best practices when needed, i.e. a shit-ton of extreme effort.


He shows an example in his talk Inventing on Principle where the tests happen in real time as the code is written. It's pretty neat.


Yes, but it does seem like the equations of general relativity need less frequent refactoring than a Java codebase.


Sure, but I think he ends up missing the mark. Ultimately his talk boils down to "let's revolutionize programming!" But as I said that ends up being fairly trite.

As for testing, integration, building, and refactoring I think it's hugely mistaken to view them as "symptoms of a problem". They are tools. And they aren't just tools used to grapple with explosions of complexity, they are tools that very much help us keep complexity in check. To use an analogy, it's not as though these development tools are like a hugely powerful locomotive that can drag whatever sorry piece of crap codebase you have out into the world regardless of its faults. Instead, they are tools that enable and encourage building better codebases, more componentized, more reliable, more understandable, etc.

Continuous integration techniques combined with robust unit and integration testing encourage developers to reduce their dependencies and the complexity of their code as much as possible. They also help facilitate refactoring, which makes reduction of complexity easier. And they actively discourage fragility, either at a component level or at a product/service level.

Without these tools there is a break in the feedback loop. Coders would just do whatever the fuck they wanted and try to smush it all together and then they'd spend the second 90% of development time (having already spent the first) stomping on everything until it builds and runs and sort of works. With more modern development processes coders feel the pain of fragile code because it means their tests fail. They feel the pain of spaghetti dependencies because they break the build too often. And they feel that pain much closer to the point of the act that caused it, so that they can fix it and learn their lesson at a much lower cost and hopefully without as much disruption of the work of others.
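(As a trivial illustration of that feedback loop, a sketch of my own; the function and tests here are made up. A unit test pins the expected behavior right next to the code, so a breaking change fails within seconds of being made rather than weeks later at integration time.)

    # A tiny unit test: the expected behavior lives next to the code,
    # so a regression fails within seconds of the change that caused it.
    def normalize_email(raw: str) -> str:
        return raw.strip().lower()

    def test_normalize_email_strips_and_lowercases():
        assert normalize_email("  Alice@Example.COM ") == "alice@example.com"

    def test_normalize_email_is_idempotent():
        once = normalize_email("Bob@Example.com")
        assert normalize_email(once) == once

    # Run with: pytest this_file.py (or wire it into any CI job)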

With any luck these tools will be even better in the future and will make it even easier to produce high quality code closer to the level of irreducible complexity of the project than is possible today.

These aren't the only ways that programming will change for the better but they are examples which I think it's easy for people to write off as not core to the process of programming.


Hrmmm ... you seem to still be tremendously missing the overarching purpose of this speech. Obviously, to delve proficiently into every aspect of programming through the ages and why things are the way they are now (e.g. what 'won' out and why) would require a course and not a talk. You seem to think that the points brought up in this talk diminish the idea that things now DO work. However, what was stressed was avoiding the trap of discontinuing the pursuit of thinking outside the box.

I am also questioning how much of the video you actually paid attention to (note: I am not questioning how much you watched). I say this because your critique is focused on the topics he covered in the earlier parts of the video, and then (LOL) you quickly criticize him for talking about concurrency (in your previous comment)... I clearly remember him talking about programming on massively parallel architectures without the need for sequential logic control via multiplexing using threads and locks. I imagine, though, it is possible you did not critique this point because it is obvious (to everyone) that this is the ultimate direction of computing (coinciding with the end of Moore's law as well).

Ahhh now that’s interesting, we are entering an era where there could possibly be a legitimate use to trying/conceiving new methods of programming? Who would have thought?

Maybe you just realized that you would have looked extremely foolish spending time on critiquing that point? IDK … excuse my ignorance.

Also, you constantly argue FOR basic management techniques and methods (as if that counters Bret's arguments) … but you fail to realize that spatial structuring of programs would be a visual management technique in itself, one that could THEN have tools developed along with it that would be isomorphic to modern integration and testing management. But I won't bother delving into that subject as I am much more ignorant on this and, more importantly … I would hate to upset you, Master.

Oh and btw (before accusations fly) I am not a Hero worshiper … this is the first time I have ever even heard of Bret Victor. Please don’t gasp too loud.


I definitely get what the OP was trying to say. Bret presented something that sounds a lot like the future even though it definitely isn't. Some of the listed alternatives, like direct data manipulation, visual languages, or non-text languages, have MAJOR deficiencies and stumbling blocks that prevented them from achieving dominance. Though in some cases it basically does boil down to which is cheaper and more familiar.


I think the title of Bret's presentation was meant to be ironic. I think he meant something like this.

If you want to see the future of computing just look at all the things in computing's past that we've "forgotten" or "written off." Maybe we should look at some of those ideas we've dismissed, those ideas that we've decided "have MAJOR deficiencies and stumbling blocks", and write them back in?

The times have changed. Our devices are faster, denser, and cheaper now. Maybe let's go revisit the past and see what we wrote off because our devices then were too slow, too sparse, or too expensive. We shouldn't be so arrogant as to think that we can see clearer or farther than the people who came before.

That's a theme I see in many of Bret's talks. I spend my days thinking about programming education and I can relate. The state of the art in programming education today is not even close to the ideas described in Seymour Papert's Mindstorms, which he wrote in 1980.

LOGO had its failings but at least it was visionary. What are MOOCs doing to push the state of the art, really? Not that it's their job to push the state of the art -- but somebody should be!

This is consistent with other things he's written. For example, read A Brief Rant on the Future of Interaction Design (http://worrydream.com/ABriefRantOnTheFutureOfInteractionDesi...). Not only does he use the same word in his title ("future"), but he makes similar points and relates the future to the past in a similar way.

"And yes, the fruits of this research are still crude, rudimentary, and sometimes kind of dubious. But look —

In 1968 — three years before the invention of the microprocessor — Alan Kay stumbled across Don Bitzer's early flat-panel display. Its resolution was 16 pixels by 16 pixels — an impressive improvement over their earlier 4 pixel by 4 pixel display.

Alan saw those 256 glowing orange squares, and he went home, and he picked up a pen, and he drew a picture of a goddamn iPad.

[picture of a device sketch that looks essentially identical to an iPad]

And then he chased that carrot through decades of groundbreaking research, much of which is responsible for the hardware and software that you're currently reading this with.

That's the kind of ambitious, long-range vision I'm talking about. Pictures Under Glass is old news. Let's start using our hands."


Okay. Single fingers are an amazing input device because of dexterity. Flat phones are amazing because they fit in my pockets. Text search is amazing because with 26 symbols I can query a very significant portion of world knowledge (I can't search for, say, a painting that looks like a Van Gogh by some other painter, so there are limits, obviously).

Maybe it is just a tone thing. Alan Kay did something notable - he drew the iPad, he didn't run around saying "somebody should invent something based on this thing I saw".

Flat works, and so do fingers. If you are going to denigrate design based on that, well, let's see the alternative that is superior. I'd love to live through a Xerox kind of revolution again.


Go watch these three talks of his: Inventing on Principle, Dynamic Drawings, and this keynote on YouTube (http://m.youtube.com/watch?v=-QJytzcd7Wo&desktop_uri=%2Fwatc...)

Next, read the following essays of his: Explorable Explanations, Learnable Programming, and Up and Down The Ladder of Abstractions

Do you still think he's all talk?

Also, I can't tell if you were implying otherwise, but Alan Kay did a few other notable things, like Smalltalk and OOP.


I've seen some of his stuff. I am reacting to a talk where all he says is "this is wrong". I've written about some of that stuff in other posts here, so I won't duplicate it. He by and large argues to throw math away, and shows toy examples where he scrubs a hard coded constant to change program behavior. Almost nothing I do depends on something so tiny that I could scrub to alter my algorithms.

Alan Kay is awesome. He did change things for the better; I'm sorry if you thought I meant otherwise. His iPad sketch was of something that had immediately obvious value. A scrubbing calculator? Not so much.


Hmm, didn't you completely miss his look, the projector, etc.? He wasn't pretending to stand in 2013 and talk about the future of programming. He went back in time and talked about the four major trends that existed back then.

The future in that talk means "today"


No. I'm saying there is a reason those things haven't become reality. They have much greater hidden costs than presented. It is the equivalent of someone dressing up as Edison and crying over the cruel fate that befell DC. Much like DC, these ideas might see a comeback, but only because the context has changed. Not being aware of history is one blunder, but failing to see why those things weren't realized is another.


I get it, I really do. And I'm very sympathetic to Victor's goals. I just don't buy it, I think he's mistaken about the most important factors to unlock innovation in programming.

His central conceit is that various revolutionary computing concepts which first surfaced in the early days of programming (the 1960s and '70s) have since been abandoned in favor of boring workaday tools of much more limited potential, and moreover that new, revolutionary concepts in programming haven't received attention because programmers have become too narrow-minded. That is simply an untrue characterization of reality.

Sure, let's look at concurrency, one of his examples. He bemoans the parallelization model of sequential programming with threads and locks as being excessively complex and inherently self-limited. And he's absolutely correct: it's a horrible method of parallelism. But it's not as though people aren't aware of that, or as though people haven't been actively developing alternate, highly innovative ways to tackle the problem every year since the 1970s. Look at Haskell, OCaml, vector computing, CUDA/GPU coding, or node.js. Or Scala, Erlang, or Rust, all three of which implement the touted revolutionary "actor model" that Victor brandishes.
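(To make the "actor model" point concrete, here's a minimal sketch in Python rather than in any of those languages; the Counter class and the message names are my own toy, not any library's API. Each actor owns its state and is reached only through its mailbox, so no locks are needed around the state itself.)

    # Toy actor-style message passing: each actor owns its state and is reached
    # only via its mailbox (a queue), serviced by a single thread.
    import threading
    import queue

    class Counter:
        def __init__(self):
            self.mailbox = queue.Queue()
            self._count = 0  # private state, touched only by the actor's own thread
            threading.Thread(target=self._run, daemon=True).start()

        def _run(self):
            while True:
                msg, reply_to = self.mailbox.get()  # messages handled one at a time, in order
                if msg == "incr":
                    self._count += 1
                elif msg == "get":
                    reply_to.put(self._count)
                elif msg == "stop":
                    break

        def send(self, msg, reply_to=None):
            self.mailbox.put((msg, reply_to))

    counter = Counter()
    for _ in range(1000):
        counter.send("incr")
    replies = queue.Queue()
    counter.send("get", replies)
    print(replies.get())  # 1000 -- no shared-memory races, no locks in user code
    counter.send("stop")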

Or look at direct data manipulation as "programming". This hasn't been ignored; it's been actively worked on in every way imaginable. CASE programming received a lot of attention, and still does. Various workflow-based programming models have received just as much attention. What about Flash? HyperCard? Etc. And there are many niche uses where direct data manipulation has proven to be highly useful. But again and again it's proven to be basically incompatible with general-purpose programming, likely because of a fundamental impedance mismatch. Billions of dollars of investment have gone into these technologies; it flies in the face of the facts to claim that we are blind to alternatives or that we haven't tried.

Or look at his example of the Smalltalk browser. How can any modern coder look at that and not laugh? Any modern IDE like Eclipse or Visual Studio can present exactly that interface to the developer.

Again and again it looks like Victor is either blindly ignorant of the practice of programming in the real world or simply falling into the "no true Scotsman" fallacy: imagining that the ideas he brings up haven't "truly" been tried, not seriously and honestly, only toyed with and abandoned. Except that in some cases, such as the actor model, they have not just been tried, they've been developed into robust solutions that are used in industry when and if they are warranted. It's hilarious that we're even having this discussion on a forum written in Arc, of all things.

To circle back to the particular examples I gave of alternative important advances in programming (focusing on development velocity and reliability), I find it amusing and ironic that some folks so easily dismiss these ideas because they are so seemingly mundane. But they are mundane in precisely the ways that structured programming was mundane when it was in its infancy. It was easy to write off structured programming as nothing more than clerical work preparatory to actual programming, but now we know that not to be true. It's also quite easy to write off testing and integration, as examples, as extraneous supporting work that falls outside "real programming". However, I believe that when the tooling of programming advances to more intimately embrace these things we'll see an unprecedented explosion in programming innovation and productivity, to a degree where people used to relying on such tools will look on our programming today as just as primitive as folks using pre-structured programming appear to us today.

Certainly a lot of programmers today have their heads down, because they're concentrated on the work immediately in front of them. But the idea that programming as a whole is trapped inside some sort of "box" which it is incapable of contemplating the outside of is utterly wrong with numerous examples of substantial and fundamental innovation happening all the time.

I think Victor is annoyed that the perfect ideal of those concepts he mentions hasn't magically achieved reification without accumulating the necessary complexity and cruft that comes with translating abstract ideas into practical realities. And I think he's annoyed that fundamentally flawed and imperfect ideas, such as the x86 architecture, continue to survive and remain eminently practical solutions decade after decade after decade.

It turns out that the real world doesn't give a crap about our aesthetic sensibilities, sometimes the best solution isn't always elegant. To people who refuse to poke their head out of the elegance box the world will always seem as though it turned its back on perfection.


> I get it, I really do.

It's always a red flag when people have to say that. Many experts don't profess to understand something which they spent a long time understanding.

Ironically, Bret Victor mentioned, "The most dangerous thought that you can have as a creative person is to think that you know what you're doing..."

The points you mention are bewildering, since in my universe most "technologists" ironically hate change. And learning new things. They seem to treat potentially better ways of doing things like a particularly offensive vegetable, ranting at length rather than simply tasting the damn thing, and at best hiding behind "Well it'd be great to try these new things, but we have a deadline now!", knowing that managers fall for this line each time, due to the pattern-matching they're trained in.

(Of course, when they fail to meet these deadlines due to program complexity, they do not reconsider their assumptions. Their excuses are every bit as incremental as their approach to tech. The books they read — if they read at all — tell them to do X, so by god X should work, unless we simply didn't do enough X.)

It's not enough that they reject concrete new technologies. They even fight learning about them in order to apply their vague lessons to their own solutions.

Fortunately, HN provides a good illustration of Bret Victor's point: "There can be a lot of resistance to new ways of working that require to kind of unlearn what you've already learned, and think in new ways. And there can even be outright hostility." In real life, I've actually seen people shout and nearly come to blows while resisting learning a new thing.


You haven't addressed any of inclinedPlane's criticisms of Bret's talk. Rather, your entire comment seems to be variations on "There are people who irrationally dislike new technology."


Well, I don't agree with your premise, that I haven't addressed any of their criticisms.

A main theme underlying their complaint is that there's "numerous examples of substantial and fundamental innovation happening all the time."

But Bret Victor clearly knows this. Obviously, he does not think every-single-person-in-the-world has failed to pursue other computational models. The question is, how does the mainstream programming culture react to them? With hostility? Aggressive ignorance? Is it politically hard for you to use these ideas at work, even when they appear to provide natural solutions?

Do we live in a programming culture where people choose the technologies they do, after an openminded survey of different models? Does someone critique the complectedness of the actor model, when explaining why they decided to use PHP or Python? Do they justify the von Neumann paradigm, using the Connection Machine as a negative case study?

There are other shaky points on these HN threads. For instance, inferring that visual programming languages were debunked, based on a few instances. (Particularly when the poster doesn't, say, evaluate what was wrong with the instances they have in mind, nor wonder if they really have exhausted the space of potential visual languages.)


@cali: I completely agree with your points. @InclinedPlane is missing the main argument.

Here is my take: TL;DR: Computing needs an existential crisis before the current programming zeitgeist is replaced. Until then, we need to encourage as many people as possible to live on the bleeding edge of "programming" epistemology.

Long version: For better or for worse, humans are pragmatic. Fundamentally, we don't change our behavior until there is a fire at our front door. In the same sense, I don't think we are going to rewrite the book on what it means to "program" until we reach an existential peril. Intel demonstrated this by switching to multicore processors after realizing Moore's law simply could not continue through increases in clock speed alone.

You can't take one of Bret's talks as his entire critique. This talk is part of a body of work in which he points out and demonstrates our lack of imagination. Bret himself points to another seemingly irrelevant historical anecdote to explain his work: Arabic numerals. In his own words:

"Have you ever tried multiplying roman numerals? It’s incredibly, ridiculously difficult. That’s why, before the 14th century, everyone thought that multiplication was an incredibly difficult concept, and only for the mathematical elite. Then arabic numerals came along, with their nice place values, and we discovered that even seven-year-olds can handle multiplication just fine. There was nothing difficult about the concept of multiplication—the problem was that numbers, at the time, had a bad user interface."

Interestingly enough, the "bad user interface" wasn't enough to dethrone Roman numerals until the Renaissance. The PRAGMATIC reason we abandoned Roman numerals was the increase in trading in the Mediterranean.

Personally, I believe that Bret is providing the foundation for the next level of abstraction that computing will experience. That's a big deal. Godspeed.


Perhaps. But I think he is a visual thinker (his website is littered with phrases like "the programmer needs to see..."). And that is a powerful component of thinking, to be sure. But think about math. Plots and charts are sometimes extremely useful, and we can throw them up and interact with them in real time with tools like Mathcad. It's great. But it only goes so far. I have to do math (filtering, calculus, signal processing) most every day at work. I have some Python scripts to visualize some stuff, but by and large I work symbolically because that is the abstraction that gives me the most leverage. Sure, I can take a continuous function that is plotted and visually see the integral and derivative, and that can be a very useful thing. OTOH, if I want to design a filter, I need to design it with criteria in mind, solve equations, and so on, not put an equation in a tool like Mathcad and tweak coefficients and terms until it looks right. Visual processing falls down for something like that.
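(For instance, here's a sketch of what "design with criteria in mind" can look like, assuming SciPy; the order, cutoff, and sample rate are made-up numbers for illustration. The design is driven by the stated criteria, and the frequency response or plot is just a check afterwards.)

    # Design a low-pass filter from explicit criteria (order, cutoff, sample rate),
    # then verify the result numerically; visualization is a check, not the method.
    import numpy as np
    from scipy import signal

    fs = 48_000      # sample rate in Hz (illustrative)
    cutoff = 1_000   # desired -3 dB point in Hz (illustrative)
    order = 4

    b, a = signal.butter(order, cutoff, btype="low", fs=fs)

    # Compute the frequency response and check the gain near the cutoff.
    w, h = signal.freqz(b, a, fs=fs)
    gain_db = 20 * np.log10(np.abs(h))
    print(f"gain near {cutoff} Hz: {gain_db[np.argmin(np.abs(w - cutoff))]:.2f} dB")  # about -3 dB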

Others have posted about the new IDEs that they are trying to create. Great! Bring them to us. If they work, we will use them. But I fundamentally disagree with the premise that visual is just flat out better. Absolutely, have the conversation, and push the boundaries. But to claim that people that say "you know, symbolic math actually works better in most cases" are resisting change (you didn't say that so much as others) is silly. We are just stating facts.

Take your Arabic numerals example. Roman numerals are, what, essentially VISUAL! III is 3. It's a horrible way to do arithmetic. Or imagine a 'visual calculator', where you try to multiply 3*7 by stacking blocks or something. Just the kind of thing I might use to teach a third grader, but never, ever something I am going to use to balance my checkbook or compute loads on bridge trusses. I'm imagining sliders to change the x and y, and the blocks rearranging themselves. Great teaching tool. A terrible way to do math, because it is a very striking but also very weak abstraction.

Take bridge trusses. Imagine a visual program that shows loads in colors - high forces are red, perhaps. A great tool, obviously. (we have such things, btw). But to design a bridge that way? Never. There is no intellectual scaffolding there (pun intended). I can make arbitrary configurations, look at how colors change and such, but engineering is multidimensional. What do the materials cost? How hard are they to get and transport? How many people will be needed to bolt this strut? How do the materials work in compression vs expansion? What are the effects of weather and age? What are the resonances. It's a huge optimization problem that I'm not going to solve visually (though, again, visual will often help me conceptualize a specific element). That I am not thinking or working purely visually is not evidence that I am not being "creative" - I'm just choosing the correct abstraction for the job. Sometimes that is visual, sometimes not.

So, okay, the claim is that perhaps visual will/should be the next major abstraction in programming. I am skeptical, for all the reasons above - my current non-visual tools provide me a better abstraction in so many cases. Prove me wrong, and I will happily use your tool. But please don't claim these things haven't been thought of, or that we are being reactionary by pointing out the reasons we choose symbolic and textual abstractions over visual ones when we have the choice (I admit sometimes the choice isn't there).


Bret has previously given a talk[1] that addresses this point. He discusses the importance of using symbolic, visual, and interactive methods to understand and design systems. [2] He specifically shows an example of digital filter design that uses all three. [3]

Programming is very focused on symbolic reasoning right now, so it makes sense for him to focus on visual and interactive representations and interactive models often are intertwined with visual representation. His focus on a balanced approach to programming seems like a constant harping on visualization because of this. I think he is trying to get the feedback loop between creator and creation as tight as possible and using all available means to represent that system.

The prototypes of his I have seen that involve direct programming tend not to look like LabVIEW; instead they are augmented IDEs that have visual representations of processing linked to the symbolic representations that were used to create them. [4] This way you can manipulate the output and see how the system changes, see how the linkages in the system relate, and change the symbols to get different output. It is a tool for making systems represented by symbols, but interacting with the system can come through either a visual or a symbolic representation.

[1] http://worrydream.com/MediaForThinkingTheUnthinkable/note.ht...

[2] http://vimeo.com/67076984#t=12m51s

[3] http://vimeo.com/67076984#t=16m55s

[4] http://vimeo.com/67076984#t=25m0s


Part of Bret's theory of learning (which I agree with) is that when "illustrating" or "explaining" an idea it is important to use multiple simultaneous representations, not solely symbolic and not solely visual. This increases the "surface area" of comprehension, so that a learner is much more likely to find something in this constellation of representations that relates to their prior understanding. In fact, that comprehension might only come out of seeing the constellation. No representation alone would have sufficed.

Further, you then want to build a feedback loop by allowing direct manipulation of any of the varied representations and have the other representations change accordingly. This not only lets you see the same idea from multiple perspectives -- visual, symbolic, etc. -- but lets the learner see the ideas in motion.

This is where the "real time" stuff comes in, and also why he gets annoyed when people see it as the point of his work. It's not; it's just a technology to accelerate the learning process. It's a very compelling technology, but it's not the foundation of his work. This is like reducing Galileo to a really good telescope engineer -- not that Bret Victor is Galileo.

I think he emphasizes the visual only because it's so underdeveloped relative to symbolic. He thinks we need better metaphors, not just better symbols or syntax. He's not an advocate of working "purely visually." It's the relationship between the representations that matters. You want to create a world where you can freely use the right metaphor for the job, so to speak.

That's his mission. It's the mission of every constructivist interested in using computers for education. Bret is really good at pushing the state of the art which is why folks like me get really excited about him! :D

You might not think Bret's talks are about education or learning, but virtually every one is. A huge theme of his work is this question: "If people learn via a continual feedback loop with their environment -- in programming we sometimes call this 'debugging' -- then what are our programming environments teaching us? Are they good teachers? Are they (unknowingly) teaching us bad lessons? Can we make them better teachers?"


The thing is that computing has both reached the limits of where "text dump" programming can go AND found that text-dump programming is something like a "local maximum" among the different clear options available to programmers.

It seems like we need something different. But the underlying problem might be that our intuitions about "what's better" don't seem to work. Perhaps an even wider range of ideas needs to be considered and not simply the alternatives that seem intuitively appealing (but which have failed compared to the now-standard approach).


I agree with this. To get out of this local trap we are going to need something revolutionary. This is not something you can plow money into; it will come, if indeed it ever comes, from left field. My bet is there is a new 'frame' to be found somewhere out in the land of mathematical abstraction. I think to solve this one we are going to have to get right down to the nitty gritty: where does complexity come from, and how specifically does structure emerge from non-structure? How can we design such systems?


It's true you couldn't plow money into such a project. But I always wondered why, when confronted with a problem like this, you couldn't hire one smart organizer who hires forty dispersed teams, each following a different lead, and another ten teams tasked with following and integrating the work of the forty (numbers arbitrary, but you get the picture).

I suppose that's how grants are supposed to work already, but it seems they've mostly degenerated into everyone following the intellectual trend with the most currency.


> it turns into typical "architecture astronaut" navel gazing

I take exception to your critique of Mr. Victor's presentation. I am sad to see that your wall of text has reached the top of this discussion on HN. To be honest, it's probably because no one has the time to wade through all of the logical fallacies, especially the ad hominem attacks and needlessly inflammatory language ("falls very short," "architecture astronaut navel gazing," "untried methods," "frankly childish, and unhelpful," "trite," "not practical," etc.)

You seem to be reacting just like the "absolute binary programmers" that Bret predicts. As far as I can gather, you are fond of existing web programming tools (HTML, CSS, JS, etc) and took Bret's criticism as some sort of personal insult (I guess you like making websites).

I think that Bret's talk is about freeing your mind from thinking that the status quo of programming methodologies is the final say on the matter, and he points out that alternative methodologies (especially more human-centric and visual methodologies) are a neglected research area that was once more fruitful in Computer Science's formative years.

Bret's observations in this particular presentation are valid and insightful in their own right. His presentation style is also creative and enjoyable. Nothing in this presentation deserves the type of language that you invoke, especially in light of the rest of Bret's recent works (http://worrydream.com/), which are neatly summed up by this latest presentation.


I'm not surprised at the language; it's war, after all. Bret and Alan Kay and others are saying, "We in this industry are pathetic and not even marginally professional." It's hard to hear and sometimes invokes an emotional response.

And what makes it hard to hear is that we know, deep in our hearts, that it's true, and that as an industry we're not really trying all that hard. It used to be Computer Science; now it's Computer Pop.


Bret and Alan Kay and others are saying, "We in this industry are pathetic and not even marginally professional." It's hard to hear and sometimes invokes an emotional response.

It sounds like sour grapes to me. Everyone else is pathetic and unprofessional because they didn't fall in love with our language and practices.


Indeed, they didn't. And it likely cost the world trillions (I'm being conservative here). The sour grapes are justified. To give a few examples:

In the sixties, people were able to build interactive systems with virtually no delay. Nowadays we have computers that are millions of times faster, yet they still lag. Seriously, more than 30 seconds just to turn on the damn computer? My father's Atari ST was faster than my brand new computer in this respect.

Right now, we use the wrong programming languages for many projects, often multiplying code size by a factor of at least 2 to 5. I know learning a new language takes time, but if you know only 2 languages and one paradigm, either you're pathetic or your teachers are.

X86 still dominates the desktop.


>In the sixties, people were able to build interactive systems with virtually no delay.

Those systems did virtually nothing. It is easy to be fast when you do nothing.

>I know learning a new language takes time, but if you know only 2 languages and one paradigm, either you're pathetic, or your teachers are. X86 still dominates the desktop.

Wow, so CS is all about what hardware you buy and what languages you program in? I guess we will just have to agree to disagree on what CS is. While programming languages are part of CS, what language you choose to write an app in really is not.


> > In the sixties, people were able to build interactive systems with virtually no delay.

> That did virtually nothing. It is easy to be fast when you do nothing.

This is kind of the point. Current interactive systems tend to do lots of useless things, most of which are not perceptible (except for the delays they cause)

> Wow, so CS is all about what hardware you buy and what languages you program in?

No. Computer Science is about assessing the qualities of current programming tools, and inventing better ones, without forgetting human warts and limitations, of course.

On the other hand, programming (solving problems with computers) is about choosing hardware and languages (among other things). You wouldn't want your project to cost 5 times more than it should just because you've chosen the wrong tools.


You wouldn't want your project to cost 5 times more than it should just because you've chosen the wrong tools.

Yep. If there were really tools out there that could beat what's in current use by a factor of 5, then they would have won, and once they exist they will win, because they would have had the time to A) implement something better and B) use all that extra time to build an easy migration path so that those on the lesser platform could migrate over.

So where is the processor that is 5x better than x86? Where is the language that is 5x better than C, C++, Java, C# (whatever you consider to be the best of the worst)? I would love to use a truly better tool; I would love to use a processor so blazingly fast that it singed my eyebrows.

This is kind of the point. Current interactive systems tend to do lots of useless things, most of which are not perceptible (except for the delays they cause)

Right because all of us sitting around with our 1/5x tools have time to bang out imperceptible features.


Thanks for saying all that. I was thinking it, but restrained myself since there seemed to be a lot of hero worship over this person going on here. But it needs to be said. Everything in that video is stuff that has been researched for decades. It isn't mainstream largely because it is facile to say 'declarative programming' or what have you, but something entirely different for it to be easier and better. Prolog is still around. Go download a free compiler and try to write a 3D graphical loop that gives you 60 fps. Try to write some seismic code with it. Try to write a web browser. Not so easy. Much was promised by things like Prolog, declarative programming, logic programming, expert systems, and so on, but again: it is easy to promise, hard to deliver. We didn't give up or forget the ideas; it's just that the payoff wasn't there (except in niche areas where in fact all of these things are going strong, as you would expect).

Graphical programming doesn't work because programs are not 2-dimensional, they are N-dimensional, and you spend all your time trying to fit things on a screen in a way that doesn't look like a tangled ball of yarn (hint: it can't be done). I've gone through several CASE tools over the decades, and they all stink. Not to mention, I don't really think visually, but more 'structurally', in terms of the interrelations of things. You can't capture that in 2D, and the problems that 2D creates more than overwhelm whatever advantages you might get going from 1D (text files) to 2D.

Things like CSP have never been lost, though they were niche for a while. Look at Ada's rendezvous model, for example.


Right. Personally I've had plenty of experience with certain examples of "declarative programming" and "direct manipulation of data" programming and other than a few fairly niche use cases they are typically horrid for general purpose programming. Think about how "direct manipulation" programming fits into a source control / branching workflow, for example. Unless there's a text intermediary that is extremely human friendly you have a nightmare on your hands. And if there is such an intermediary then you're almost always better off just "directly manipulating" that.


> Think about how "direct manipulation" programming fits into a source control / branching workflow, for example.

Think about how "automobiles" fit into a horse breeding / grooming workflow, for example.


No reason that source-control for a visual programming language couldn't be visual.


Think about how "direct manipulation" programming fits into a source control / branching workflow, for example.

Trivially. Since virtually all currently used languages form syntactic trees (the exceptions being such beasts as Forth, PostScript, etc.), you could use persistent data structures (which are trees again) for programs in these languages. Serializing the persistent data structure in a log-like fashion would be equivalent to working with a Git repository, only on a more fine-grained level. Essentially, this would also unify the notion of in-editor undo/redo and commit-based versioning; there would be no difference between the two at all. You'd simply tag the whole thing every now and then whenever you reach a development milestone.
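(A minimal sketch of the idea in Python, my own toy rather than a proposal for any particular tool: immutable syntax-tree nodes with structural sharing, where every edit produces a new root and the version log is just the list of roots.)

    # Persistent (immutable) tree with structural sharing: an edit rebuilds only
    # the path from the edited node up to the root; every other subtree is shared
    # between versions. The "history" is simply an append-only list of roots.
    from dataclasses import dataclass
    from typing import Tuple

    @dataclass(frozen=True)
    class Node:
        label: str
        children: Tuple["Node", ...] = ()

    def replace_at(root: Node, path: Tuple[int, ...], new_node: Node) -> Node:
        """Return a new root with the node at `path` replaced; untouched subtrees are shared."""
        if not path:
            return new_node
        i, rest = path[0], path[1:]
        children = list(root.children)
        children[i] = replace_at(root.children[i], rest, new_node)
        return Node(root.label, tuple(children))

    # "Program" v0: a tiny syntax tree.
    v0 = Node("module", (Node("def f", (Node("return 1"),)), Node("def g")))
    # One edit: change f's body. This is simultaneously an undo step and a commit.
    v1 = replace_at(v0, (0, 0), Node("return 2"))

    history = [v0, v1]                        # fine-grained, log-like version history
    assert v0.children[1] is v1.children[1]   # the untouched "def g" subtree is shared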


> you spend all your time trying to fit things on a screen in a way that doesn't look like a tangled ball of yarn (hint, can't be done)

The way we code now leads to tangled balls of yarn. That won't be fixed by simply moving to a graphical (visual) programming language.


Well, there is yarn and then there is yarn. I don't mean spaghetti code, which is its own problem separate from representation. I'm thinking about interconnection of components, which is fine. Every layer of linux, say, makes calls to the same low level functions. If you tried to draw that it would be unreadable, but it is perfectly fine code - it is okay for everyone to call sqrt (say) because sqrt has no side effects. Well, sqrt is silly, but I don't know the kernel architecture - replace that with virtual memory functions or whatever makes sense.


> If you tried to draw that it would be unreadable, but it is perfectly fine code

You seem to be implying that no one could figure out how to apply the "7 +/- 2 rule"(https://en.wikipedia.org/wiki/The_Magical_Number_Seven,_Plus...) to a visual programming language.


I have actually been thinking about 1D coding vs 2D coding. Isn't 2D describing nD a little bit closer? Like a photograph of a sculpture... a little easier to get the concept than with written description, no matter how eloquent.

Re: the ball of yarn, we're trying to design that better in NoFlo's UI. Think about a subway map that designs itself around your focus. Zoom out to see the whole system, in to see the 1D code.


All I can say is, have you tried to use a CASE tool to do actual coding? I have, forced on me by various MIL-STD compliant projects.

X and Y both talk to A and B. Represent that in 2D without crossing lines.

Okay, you can, sure. If X and Y are at the top, and A and B are at the bottom, twist A and Y, and the crossing in the middle goes away. But, you know, X is related to Y (same level in the software stack), and I really wanted to represent them at the same level. Oops.

And, I'm sure you can see that all it takes is one additional complication, and you are at a point where you have crossed lines no matter what.

Textually there is no worry about layout, graphically, there is. I've seen engineers spend days and weeks just trying to get boxes lined up, moving things around endlessly as requirements change - you just spend an inordinate amount of time doing everything but engineering. You are drawing, and trying to make a pretty picture. And, that is not exactly wasted time. We all know people spend too much effort making PowerPoint 'pretty', and I am not talking about that. I mean that if the image is not readable then it is not usable, so you have to do protracted layout sessions.

Layout is NP-hard. Don't make me do layout to write code.

tl;dr version: code is multi-dimensional, but not in a 'layout' way. If you force me to do 2D layout you force me to work in an unnatural way that is unrelated to what I am actually trying to do. You haven't relaxed the problem by a dimension by introducing layout; you've multiplied the constraints like crazy (that's a technical math term, I think!).

And then there is the information compression problem. Realistically, how much can you display on a screen graphically? I argue far less than textually. I already do everything I can to maximize what I can see; scrolling involves a context switch I do not want to do. So, in {} languages I put the { on the same line as the expression, "if(){", to save a line, and so on. Try a graphical UML display of a single class: you can generally only fit a few methods in, good luck with private data, and all bets are off if methods are more than 1-2 short words long. I love UML for a one-time, high-level view of an architecture, but for actually working in? Horrible, horrible, horrible. For example, I have a ton of tiny classes that do just one thing and get used everywhere. Do I represent that exactly once, and then everywhere else you have to remember that diagram? Do I copy it everywhere, and face editing hell if I change something? Do I have to reposition everything if I make a method name longer? Do I let the tool do the layout, and get an unreadable mess? And so on. The bottom line is you comprehend better if you can see it all on one "page", and graphical programming has always meant less information on that page. That's a net loss in my book. (This was very hand-wavy; I've conflated class diagrams with graphical programming, for example. We'd both need access to a whiteboard to really sketch out all of the various issues.)

Views into 1D code are a different issue, which is what I think you are talking about with NoFlo (I've never seen it). If you can solve the layout problem you will be my hero, perhaps, so long as I can retain the textual representation that makes things like git, awk, sed, and so on so powerful. But I ask: what is that going to buy me compared to a typical IDE with solutions/projects/folders/files in a tab, a class browser in another tab, auto-complete, and easy navigation (ctrl+right click to go to definition, and so on)? Can I 'grep' all occurrences of a word (I may want to grep comments, so this is not strictly a code search)?

Hope this all doesn't come across as shooting you down or bickering, but I am passionate about this stuff, and I am guessing you are also. I've been promised the wonders of the next graphical revolution since the days of structured design, and to my way of thinking none of it has panned out. Not because of the resistance or stupidity of the unwashed masses, but because what we are doing does not inherently fit into 2D layout. There's a huge impedance mismatch between the two which I assert (without proof) will never be fixed. Prove me wrong! (I say that nicely, with a smile)

Sorry for the length; I didn't have time to make it shorter.


I write all of my software in a 2D, interactive, live-executing environment. Yes, layout is a problem. But you get good at it, and then it's not a problem anymore.

Moreover, the UI for the system I use is pretty basic and only has a few layout aids – align objects, straighten or auto-route patch cords, auto-distribute, etc. I can easily imagine a more advanced system that would solve most layout problems.

A 2D editor with all of the power of vim or emacs would be formidable. Your bad experience with "CASE tools" does not prove the rule.


What environment?


> Sorry for the length; I didn't have time to make it shorter.

Favorite phrase.

Let me try the tl;dr:

Assembler won over machine code, and then lost to the next higher-level thing, for the same reason: in practical terms it was easier and more practical. Reality decided based on constraints.

If something doesn't go mainstream, it means it's not worth it, because it's more expensive...


Yes, with Pure Data and Quartz Composer.

As a JS hacker I wanted to bring that kind of coding to the browser for kids so I made http://meemoo.org/ as my thesis. Now I have linked up with http://noflojs.org/ to bring the concept to more general purpose JS, Node and browser.

I won't have really convinced myself until I rewrite the graph editor with the graph editor. Working on that now.


Bingo, but it can also go the other way. Take the photograph example: how much time would you need, tangling lines (or any other method you can come up with), to capture all the level of detail you're looking at?

Now, "no matter how eloquent": if the photo can be made digital, it can be saved to a file and described with a rather simple language, all 0s and 1s. So it can be done, and methods for being that eloquent exist.

What if the programs written as text are actually a representation of more complex ideas? (IMO that's what they are; code is just the way of... coding those ideas into text.) And text is visual, remember (the same abstraction covers both the words and the ideas they represent).


> I'd list things such as development velocity and end-product reliability as being far more important.

Your main thesis is that software and computing should be optimized to ship products to consumers.

The main thesis of guys like Alan Kay is that we should strive to make software and computing that is optimized for expanding human potential.

Deep down most of us got in to computing because it is a fantastic way to manipulate our world.

Bret Victor's talks instill a sense of wonderment and discovery, something that has often been brow-beaten out of most of us working stiffs. The talks make us feel like there is more to our profession than just commerce. And you know what? There is. And you've forgotten that to the point where you're actually railing against it!

Come back to the light, fine sir!


> Your main thesis is that software and computing should be optimized to ship products to consumers.

Those were just examples of other things I thought were more important; it wasn't an exhaustive list. However, it's interesting that you focus on "optimizing to ship products to consumers", when I mentioned no such thing. I mentioned development velocity and end-product reliability. These are things that are important to the process of software development regardless of the scale of the project, the team working on it, or the financial implications of the project.

They are tools. Tools for making things. They enable both faceless corporations who want to make filthy lucre by shipping boring line-of-business apps and individuals who want to "expand human potential" or "instill a sense of wonderment and discovery".

Reliability and robustness are very fundamental aspects to all software, no matter how it's built. And tools such as automated builds combined with unit and integration tests have proven to be immensely powerful in facilitating the creation of reliable software.

If your point is that non-commercial software need not take advantage of testing or productivity tools because producing a finished product that runs reliably is unimportant if you are merely trying to "expand human potential" or what-have-you then I reject that premise entirely.

If you refuse to acknowledge that the tools of the trade in the corporate world represent a fundamentally important contribution to the act of programming then you are guilty of the same willful blindness that Bret Victor derides so heartily in his talk.


You know, in some sense those early visionaries were beaten by the disruptive innovators of their day.

I think the argument here is that 1000 little choices favoring incremental advantage in the short term add up to a sub-optimal long term, but I'm not so sure. I have a *NIX machine in my phone. Designers "threw it in there" as the easy path. And it works.


C'mon, "designers threw it in there"? Don't you think it was a hard-thought choice by skilled engineers?


Just trying to show the Linux kernel as an inexpensive building block in this day and age. One that is used casually, in Raspberry Pis, in virtualization, etc.


Nope, Android is based on Linux because it was available and relatively easy to get going quickly.


>> I'd list things such as development velocity and end-product reliability as being far more important.

Your main thesis is that software and computing should be optimized to ship products to consumers.

No, the main thesis is that it should be optimized for solving problems, and for being as easy to adjust as possible.

>The main thesis of guys like Alan Kay is that we should strive to make software and computing that is optimized for expanding human potential.

We are, even with our current tools. Right now you have the opportunity to express yourself to the whole world in this very place, all of it built with these "limiting" tools... IMO the presentation is about exploring whether maybe there is a better approach... quotes on "maybe".

> Come back to the light, fine sir!

They are all lights... it's just a matter of the right combination: you don't put the ultra-bright LEDs from your vehicle in your living room, or vice versa...


Brilliant analysis! Navel gazing indeed. Typical NCA (Non-Coding Architect) stuff.

This reminds me of UML and the Model-Driven Architecture movement of days past, where architecture astronauts imagined a happy little world where you could just get away from that dirty coding, join some boxes with lines in all sorts of charts, and have that generate your code. And it would produce code you actually want to ship and that does what you want it to do.

This disdain for writing code is not new. This classic essay about "code as design" from 1992 (!) is still relevant today:

http://www.developerdotstar.com/mag/articles/reeves_original...


In the presenter's worldview it seems as though a lot of subtle details are ignored or just not seen, whereas in reality seemingly subtle details can sometimes be hugely important. Consider Ruby vs Python, for example. From a 10,000 foot view they almost look like the same language, but at a practical level they are very different. And a lot of that comes down to the details. There are dozens of new languages within the last few decades or so that share almost all of the same grab bag of features in a broad sense but where the rubber meets the road end up being very different languages with very different strengths. Consider, for example, C# vs Go vs Rust vs Coffeescript vs Lua. They are all hugely different languages but they are also very closely related languages.

I suspect that the killer programming medium of 2050 isn't going to be some transformatively different methodology for programming that is unrecognizable to us, it's going to be something with a lot of similarities to things I've listed above but with a different set of design choices and tradeoffs, with a more well put together underlying structure and tooling, and likely with a few new ways of doing old things thrown in and placed closer to the core than we're used to today (my guess would be error handling, testing, compiling, package management, and revision control).

There is just so much potential in plain-Jane text-based programming that I find it odd that someone would so easily lump it into a single category and write it all off at once. It's a medium that can embrace everything from Java on the one hand to Haskell or Lisp on the other; we haven't come anywhere close to reaching the limits of expressiveness available in text-based programming.


You can cast this entire comment in terms of hex/assembler vs C/Fortran and you get the same logical form.

We haven't come anywhere close to reaching the limits of expressiveness in assembler either, yet we've mostly given up on it for better things.

Try arguing the devil's advocate position. What can you come up with that might be better than text-based programming? Nothing? Are we really in the best of all possible worlds?


I don't think it's fair to call him a Non-Coding Architect. Have you seen his other talks, or the articles he's published via his website http://worrydream.com ? Bret clearly codes.


But does he ship?


Sometimes not shipping gives us more freedom to explore.


I really wish he did. I think one of the greatest disservices he does himself is not shipping working code for the examples in his presentation. We've seen what and we're intrigued, but ship something that shows how so we can take the idea and run with it.


So, have you seen Media for Thinking the Unthinkable?

http://vimeo.com/67076984

The working code for the Nile viewer presented is on GitHub:

https://github.com/damelang/nile/tree/master/viz/NileViewer


I think the whole point of his series of talks is to inspire others to invent new things that not even he has thought of.


A delay in releasing code would be valuable then. Those too impatient to wait can start hacking on something new now and give lots of thought to this frontier and those that want to explore casually can do so a few months later when the source is released. Releasing nothing is a non-solution. Why make everyone else stumble where you have? That's just inconsiderate.

Bernard of Chartres used to say that we are like dwarfs sitting on the shoulders of giants, so that we can see more than they can, and things farther away, not because of any sharpness of our own sight or height of our bodies, but because we are lifted up and carried aloft by their gigantic stature.


Bingo. This reminds me of how people never have time to get bored anymore, when boredom is exactly what gives your mind the free space to wander and innovate. The classic scenario of a problem solving itself once you give it a break...


> But does he ship?

Why does that matter?


Fooling around with a paint brush in your study is fine, but real artists ship.

A bunch of ideas that sound great in theory are just that: theory. It is only by surviving the crucible of the real world that ideas are validated and truly tested. When Guy Steele and James Gosling were the only software developers in the world who could program in Java, every Java program was a masterpiece. It was only once the tool was placed in the hands of mere mortals that its flaws became truly known.


Sometimes the journey is the product.

Walk around a good gallery. There are a pretty good number of pieces entitled "Study #3", or something of that sort. An artist is playing around with a tool, or a technique, trying to figure out something new.

Piano music is probably where this concept gets the most attention. Many études, such as those by Chopin, are among the most significant musical works of the era.


Yes, sometimes.

In another talk Bret claims that you basically cannot do visual art/design without immediate feedback. I was left wondering how he thinks people who create metal sculptures via welding, or carve marble, possibly work. It's just trivially wrong to assert that you need that immediate feedback, and it calls all of the reasoning into question.


Good point. I think programmers would be better off dropping the artistic pretensions altogether and accepting that they are much closer to engineers and architects in their construction of digital sandcastles.


and some artists create amazing art coding it in Processing; just take a look at Casey Reas's works.

Also, Beethoven often wrote down his complex music without using an instrument, since he heard it in his mind...


You're forgetting about the hundreds, even thousands, of paintings they did that are not in the gallery. Those paintings are still "shipping" even though you never see them in the gallery.

You can't play around with a tool or technique without actually producing something. You can talk about how a 47.3% incline on the brush gives the optimal result all day long, but it's the artist that actually paints that matters.


> Fooling around with a paint brush in your study is fine, but real artist ship.

Van Gogh didn't ship.


> Why does that matter?

Because I want to play with his Drawing Dynamic Viz demo. http://worrydream.com/DrawingDynamicVisualizationsTalkAddend...


He probably doesn't. He spends too much of his time away from the machine work :)

The fact that you bring up "shipping" as part of this discussion just shows how right he is.


He is not allowed to talk about his iPad / Apple stuff. Did TBL ship the W3C? Protocols are the perfect example of shipping by design.


Typical NCA (Non-Coding Architect) stuff.

I assure you that devices of this sort require a great deal of code: http://cachepe.zzounds.com/media/quality,85/Ion_front-c20cdb...


Can you expand on what's wrong with declarative design? I'm not talking about UML, but modeling specifically.

Since I've been doing it for quite a few years I guess I know a thing or two about MDA/MDE. And it's not about disdain for writing code.


> And there are projects, such as couch db, which are based on Erlang but are moving away from it. Why is that?

That is news to me. CouchDB is knee deep in Erlang and loving it. They are merging with BigCouch (from Cloudant) which is also full on Erlang.

Come to think of it, you are probably thinking of Couchbase, which doesn't really have much "couch" in it except for the name and Couch's original author working on it.

> Rather, it's because languages which are highly optimized for concurrency aren't always the best practical solution, even for problem domains that are highly concurrency bound, because there are a huge number of other practical constraints which can easily be just as or more important.

That is true; however, what is missing is that Erlang is optimized for _fault tolerance_ first, then concurrency. Fault tolerance means isolation of resources, and there is a price to pay for that. High concurrency, the actor model, functional programming, immutable data, and run-time code reloading all flow from the "fault tolerance first" idea.

It is funny: many libraries/languages/projects that try to copy Erlang completely miss that one main point, go on to implement "actors", run the good ol' ring benchmark, and claim "we surpassed Erlang, look at these results!" Yeah, that is pretty amusing. I want to see them do a completely concurrent GC and hot code reloading (note: those are hard to add on; they have to be baked into the language).
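
For anyone who hasn't seen it, the ring benchmark being mocked is essentially "spawn N processes in a ring and pass a token around M times." A minimal sketch of that idea in plain Java (thread-per-node with blocking queues; the class name and sizes are just made up for illustration) shows how little it actually exercises: raw spawn-and-message overhead, nothing about fault tolerance, isolation, or code reloading.

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.CountDownLatch;

    // Minimal ring benchmark sketch: N nodes in a ring forward a token M times.
    public class RingBenchmark {
        public static void main(String[] args) throws Exception {
            final int n = 500;       // nodes in the ring (threads here)
            final int m = 100_000;   // total hops around the ring
            @SuppressWarnings("unchecked")
            BlockingQueue<Integer>[] q = new BlockingQueue[n];
            for (int i = 0; i < n; i++) q[i] = new ArrayBlockingQueue<>(1);
            CountDownLatch done = new CountDownLatch(1);

            for (int i = 0; i < n; i++) {
                final BlockingQueue<Integer> in = q[i], out = q[(i + 1) % n];
                Thread t = new Thread(() -> {
                    try {
                        while (true) {
                            int hops = in.take();            // receive the token
                            if (hops == 0) { done.countDown(); return; }
                            out.put(hops - 1);               // forward it
                        }
                    } catch (InterruptedException ignored) { }
                });
                t.setDaemon(true);                           // let the JVM exit at the end
                t.start();
            }

            long start = System.nanoTime();
            q[0].put(m);                                     // inject the token
            done.await();                                    // token exhausted
            System.out.printf("%d hops in %.1f ms%n", m, (System.nanoTime() - start) / 1e6);
        }
    }

Any runtime can be tuned to look good on this loop; it says nothing about what happens when one of the nodes crashes mid-flight.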


They also seem to miss the preemptive scheduling, built-in flow control and per-process GC (which leads to minimal GC pauses). Those are impossible to achieve without a purpose-built VM. No solution on the Sun JVM will ever be able to replace Erlang for applications which require low-latency processing. Similarly, no native-code solution can do so either: you need your runtime to be able to preempt user code at any point in time (i.e. Go is not a replacement for Erlang).


> Those are impossible to achieve without a purposely built VM. No solution on Sun JVM will ever be able to replace Erlang for applications which require low-latency processing.

Impossibility claims are very hard to prove and are often wrong, as in this case.

First, commercial hard real-time versions of the JVM with strong timing and preempting guarantees exist and are commonly used in the defense industry. To the best of my knowledge, there are no mission- and safety- critical weapon systems written in Erlang; I personally know several in Java. These are systems with hard real-time requirements that blow stuff up.

In addition, Azul's JVM guarantees no GC pauses larger than a few milliseconds (though it has no preemption guarantees).

But the fact of the matter is that even a vanilla HotSpot VM is so versatile and performant, that in practice, and if you're careful about what you're doing, you'll achieve pretty much everything Erlang gives you and lots more.

People making this claim (Joe Armstrong first among them) often fail to mention that those features that are hardest to replicate on the JVM are usually the less important ones (like perfect isolation of processes for near-perfect fault-tolerance requirements). But when it comes to low-latency stuff, the JVM can and does handily beat Erlang.

P.S. As one of the authors of said ring-benchmark-winning actor frameworks for the JVM, I can say that we do hot code swapping already, and if you buy the right JVM you also get a fully concurrent GC, and general performance that far exceeds Erlang's.


> First, commercial hard real-time versions of the JVM with strong timing and preempting guarantees exist and are commonly used in the defense industry. To the best of my knowledge, there are no mission- and safety- critical weapon systems written in Erlang; I personally know several in Java. These are systems with hard real-time requirements that blow stuff up.

That's why I said Sun JVM in the first place. Azul and real-time Java are those purpose-built VMs I mentioned.

Your claim about the Sun JVM is more interesting. If it is so versatile, why do no network applications exist on the JVM that provide at least adequate performance? Sure, the JVM is blazing fast as far as code execution speed goes; the point is that writing robust zero-copy networking code is so hard on the JVM that this raw execution speed does not help.


I'm not sure what you mean when you say network applications that provide at least adequate performance. Aren't Java web-servers at the very top of every performance test? Isn't Java the #1 choice for low-latency high-frequency-trading applications? Aren't HBase, Hadoop and Storm running on the JVM?

The whole point of java.nio, introduced over 10 years ago back in Java 1.4, is robust zero-copy networking (with direct byte buffers). Higher-level networking frameworks, like the very popular Netty, are based on NIO (although, truth be told, up until the latest version of Netty there was quite a bit of copying going on in there), and Netty is at the very top of high-performance networking frameworks in any language or environment.
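
To make the java.nio point a bit more concrete, here is a minimal sketch (my own illustration, not Netty internals; the class name, buffer size, and the host and argument in main are placeholders) of the two facilities being referred to: direct ByteBuffers, which live outside the Java heap so the OS can fill and drain them without an extra copy, and FileChannel.transferTo, which can hand file contents to a socket largely inside the kernel.

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.channels.SocketChannel;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;

    public class NioSketch {
        // Echo bytes using a direct buffer: socket reads/writes go through
        // native memory, avoiding an extra copy into the Java heap.
        static void echo(SocketChannel ch) throws IOException {
            ByteBuffer buf = ByteBuffer.allocateDirect(64 * 1024);
            while (ch.read(buf) != -1) {
                buf.flip();
                while (buf.hasRemaining()) ch.write(buf);
                buf.clear();
            }
        }

        // Send a whole file: transferTo() lets the OS move the bytes
        // kernel-to-kernel (e.g. via sendfile) where the platform supports it.
        static void sendFile(Path path, SocketChannel ch) throws IOException {
            try (FileChannel file = FileChannel.open(path, StandardOpenOption.READ)) {
                long pos = 0, size = file.size();
                while (pos < size) pos += file.transferTo(pos, size - pos, ch);
            }
        }

        public static void main(String[] args) throws IOException {
            // Hypothetical usage: host and file argument are placeholders.
            try (SocketChannel ch = SocketChannel.open(new InetSocketAddress("example.com", 80))) {
                sendFile(Path.of(args[0]), ch);
            }
        }
    }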


> No solution on Sun JVM will ever be able to replace Erlang for applications which require low-latency processing

http://martinfowler.com/articles/lmax.html

I've spent a great deal of time trying to make a very similar Erlang system reach 1/100 of the throughput/latency that the LMAX guys managed in pure Java. There are days when I cry out in my sleep for a shared mutable variable.


If you need shared state to pass a lot of data between CPUs, then Erlang might not be the right solution; however, the part that needs to do it can be isolated, implemented in C, and communicated with from BEAM.

What always amuses me about LMAX is the way they describe it (breakthrough! invention!), when what they "invented" is a ring buffer, which is the solution everybody arrives at first. It is how all device drivers communicate with peripheral devices, for example, and a fast IPC mechanism people have used in UNIX for decades. Even funnier, it takes less code to implement one in C from scratch than to use the LMAX library.
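
For reference, the core data structure really is that small. Below is a minimal single-producer/single-consumer ring buffer sketch, written in Java here since the Disruptor itself is Java (the parent's point is that the C version is about as short), with none of the Disruptor's batching, sequences, or cache-line padding.

    import java.util.concurrent.atomic.AtomicLong;

    // Minimal single-producer / single-consumer ring buffer.
    // One thread may call offer(), one other thread may call poll().
    public final class SpscRing<T> {
        private final Object[] slots;
        private final int mask;                            // capacity must be a power of two
        private final AtomicLong head = new AtomicLong();  // next slot to read
        private final AtomicLong tail = new AtomicLong();  // next slot to write

        public SpscRing(int capacityPowerOfTwo) {
            slots = new Object[capacityPowerOfTwo];
            mask = capacityPowerOfTwo - 1;
        }

        public boolean offer(T value) {                    // producer thread only
            long t = tail.get();
            if (t - head.get() == slots.length) return false;   // full
            slots[(int) (t & mask)] = value;
            tail.lazySet(t + 1);                           // publish the write
            return true;
        }

        @SuppressWarnings("unchecked")
        public T poll() {                                  // consumer thread only
            long h = head.get();
            if (h == tail.get()) return null;              // empty
            T value = (T) slots[(int) (h & mask)];
            slots[(int) (h & mask)] = null;
            head.lazySet(h + 1);                           // publish the read
            return value;
        }
    }

One producer thread calls offer() and one consumer thread calls poll(); each index is published with lazySet only after its slot has been written or cleared.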


Your criticism seems to be framed against where we are at today.

As programmers we have a fragmented feedback cycle regardless of whether we are writing our software in Erlang or Lisp or C++.

While it is true that realistic matters like 'integration' and 'development velocity' are important enough in modern-day programming to determine what path we must take, we shouldn't let them change our destination.

If you were to envision programming nirvana would it be mostly test coverage and scrum boards?


> If you were to envision programming nirvana would it be mostly test coverage and scrum boards?

Far from it. Indeed I think that TDD is vastly over-used and often harmful and SCRUM is more often development poison than anything else. But the fact that these things are popular despite the frequent difficulty of implementing them correctly is, I think, indicative of two things. First, that there is something of serious and fundamental value there which has caused so many people to latch onto such ideas zealously, even without fully understanding where the value in such ideas comes from. And second, that due to their being distanced from the "practice of programming" they are more subject to misinterpretation and incorrect implementation (this is a hard problem in programming as even the fundamentals of object oriented design aren't immune to such problems even though they tend to be baked into programming languages fairly deeply these days).

I think that unquestionably a routine build/test cycle is a massive aid to development quality. It doesn't just facilitate keeping a shipping product on schedule; it has lots of benefits that diffuse out to every aspect of development in an almost fractal fashion. For example, having a robust unit test suite vastly facilitates refactoring, which makes it easier to improve code quality, which makes it easier to maintain and modify code, which makes it easier to add or change features, and so forth. It's a snowball effect. Similarly, I think that unquestionably a source control system is a massive aid to development quality and pace. That shouldn't be a controversial statement today, though it would have been a few decades ago. More so, I think that unquestionably the branching and merging capabilities of advanced source control systems are a huge aid in producing software.
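
As a trivial illustration of that snowball effect (a made-up example; JUnit 5 is assumed only because it is the most common harness on the JVM): the test pins down observable behaviour, so the implementation underneath can be rewritten freely without anyone re-verifying it by hand.

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    // Hypothetical example: the tests document observable behaviour, so the
    // slugify() implementation can be refactored (regex, loop, whatever)
    // without anyone re-checking it manually.
    class SlugifyTest {
        static String slugify(String title) {
            return title.trim().toLowerCase()
                        .replaceAll("[^a-z0-9]+", "-")
                        .replaceAll("(^-)|(-$)", "");
        }

        @Test
        void collapsesPunctuationAndSpaces() {
            assertEquals("hello-world", slugify("  Hello, World!  "));
        }

        @Test
        void leavesSimpleSlugsAlone() {
            assertEquals("release-notes", slugify("release-notes"));
        }
    }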

Development velocity has a lot of secondary and higher order effects that impact everything about the software project. It makes it easier to change directions during development, it lowers the overhead for every individual contributor, and so on. Projects with higher development velocity are more agile, they are able to respond to end-user feedback and test feedback and are more likely to produce a reliable product that represents something the end-users actually want without wasting a lot of developer time along the way.

Some people have tried to formalize such "agile" processes into very specific sets of guidelines but I think for the most part they've failed to do so successfully, and have instead created rules which serve a far too narrow niche of the programming landscape and are also in many cases too vague to be applied reliably. But that doesn't mean that agility or increased development velocity in general are bad ideas, they are almost always hugely advantageous. But they need to be exercised with a great deal of thought and pragmatism.

Also, as to testing, it also suffers from the problem of being too distanced from the task of programming. There are many core problems in testing such as the fact that test code tends to be of lower quality than product code, the problems of untested or conflicting assumptions in test code (who tests the tests?), the difficulty of creating accurate mocks, and so on. These problems can, and should, be addressed but one of the reasons why they've been slow to be addressed is that testing is still seen as something that gets bolted onto a programming language, rather than something that is an integral part of coding.

Anyway, I've rambled too long I think, it's a deep topic, but hopefully I've addressed some of your points.


It's funny that you mention testing. TDD/BDD/whatever IS declarative programming, except you're doing the declarative-to-imperative translation yourself.

TDD has always felt sort of wrong to me because it really felt like I was writing the same code twice. Progress, in this regard, would be the spec functioning as actual code.


Characterizations like 'wrongheadedness' have no part in this discussion. If his conclusions are wrong you can explain why without generalizing to his nature as a person.


"Worse is better", aka "New Jersey style", as in RPG's famous essay? [1]

[1] http://www.dreamsongs.com/WorseIsBetter.html


Daniel Weinreb's blog post response to "Worse is Better" is worth reading. DLW was "the MIT guy" and was a news.yc user (dlweinreb).

http://web.archive.org/web/20110709052759/http://danweinreb....


Precisely.


His "computers should figure out how to talk to each other" immediately reminded me the "computers should heal themselves" one finds in "objects have failed" from the same author. Both shells seem equally empty to me.

Also, if you want more fuel, you might find it funny that he refers to GreenArrays in his section about parallel computing. Chuck Moore, the guy behind it, is probably the last and ultimate "binary programmer" on this planet. But at the same time, he invented a kind of "reverse syntax highlighting", where you set the colors of your tokens in order to set their function, in a non-plain-text-source system (see ColorForth).


I have no idea why you're calling Chuck Moore a "binary programmer", by the definition given in today's talk.

Forth is anything but machine code. Forth and Lisp both share the rare ability to describe both the lowest and the highest layers of abstraction equally well.


Chuck Moore is definitely an interesting guy. It's hard to stereotype him, but he is definitely closer to the metal than most other language designers.


For one thing, Forth is the machine code for the chips he designs. Moreover, in the various iterations of his systems on the x86, he was never afraid to insert hex codes in his source when he needed to, typically in order to implement his primitives, because he judged that an assembler was unnecessary. At one point he tried to build a system in which he coded in something rather close to object code. This system led him to his colorForth, in which you actually edit the object code with a specialized editor that makes it look like you're editing normal source code.

Forth absolutely does not share the ability to describe both high and low levels equally well. Heck, Moore even rejects the idea of "levels" of programming.


Bret Victor's talk wasn't about any particular technology. It was about being able to change your mind. It's not important that "binary programmers" programmed in machine code. It's important that they refused to change their minds. We should avoid being "binary programmers" in this sense.

> For one thing, Forth is the machine code for the chips he designs.

You're right, I should've said Forth isn't just machine code.

> Forth does absolutely not share the ability to describe both high and low level equally well. Heck, Moore even rejects the idea of "levels" of programming.

This is a misunderstanding. He rejects complex programming hierarchies, wishing instead to simply have a programmer-Forth interface and a Forth-machine interface. He describes programming in Forth as building up the language towards the problem, from a lower level to a higher level:

"The whole point of Forth was that you didn't write programs in Forth, you wrote vocabularies in Forth. When you devised an application, you wrote a hundred words or so that discussed the application, and you used those hundred words to write a one line definition to solve the application. It is not easy to find those hundred words, but they exist, they always exist." [1]

Also:

"Yes, I am struck by the duality between Lisp and Lambda Calculus vs. Forth and postfix. But I am not impressed by the productivity of functional languages." [2]

Here's what others have said:

"Forth certainly starts out as a low-level language; however, as you define additional words, the level of abstraction increases arbitrarily." [3]

Do you consider Factor a Forth? I do.

"Factor allows the clean integration of high-level and low-level code with extensive support for calling libraries in other languages and for efficient manipulation of binary data." [4]

1. http://c2.com/cgi/wiki?ForthValues

2. http://developers.slashdot.org/story/01/09/11/139249/chuck-m...

3. http://c2.com/cgi/wiki?ForthVsLisp

4. http://factorcode.org/littledan/dls.pdf


Absolutely. I was waiting for him to mention what I think of as the Unix/Plan 9/REST principle the whole time. IMO this is one of the most important concepts in computing, but too few people are explicitly aware of it. Unfortunately he didn't mention it.

Really what Victor is complaining about is the web. He doesn't like the fact that we are hand-coding HTML and CSS in vim instead of directly manipulating spatial objects. (Although HTML is certainly declarative. Browsers actually do separate intent from device-specific details. We are not writing Win32 API calls to draw stuff, though he didn't acknowledge that.)

It has been impressed on me a lot lately how much the web is simply a distributed Unix. It's built on a file-system-like addressing scheme. Everything is a stream of bytes (with some additional HTTP header metadata). There are a bunch of orthogonal domain-specific languages (HTML/CSS/etc vs troff/sed/etc). They both have a certain messiness, but that's necessary and not accidental.

This design is not accidental. It was taken from Unix and renamed "REST". The Unix/Plan 9/REST principle is essentially the same as the Alan Perlis quote: "It is better to have 100 functions operate on one data structure than 10 functions on 10 data structures." [1] The single data structure is the stream of bytes, or the file / file descriptor.

For the source code example, how would you write a language-independent grep if every language had its own representation? How about diff? hg or git? merge tools? A tool to jump to source location from compiler output? It takes multiple languages to solve any non-trivial problem, so you will end up with an M x N combinatorial explosion (N tools for each of M languages), whereas you want M + N (M languages + N tools that operate on ALL languages).
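
A toy version of the M + N argument (my own sketch, nothing from the talk): because every language serializes to lines of text, one twenty-line filter works on all of them, which is exactly what a per-language structured representation would give up unless every language agreed on the same structure.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.util.regex.Pattern;

    // A bare-bones grep: it knows nothing about C, Python, Haskell, or HTML,
    // yet works on source in all of them because they share one representation
    // (a stream of text lines). That is the "M + N" side of the trade-off.
    public class TinyGrep {
        public static void main(String[] args) throws Exception {
            Pattern p = Pattern.compile(args[0]);
            BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
            String line;
            int lineNo = 0;
            while ((line = in.readLine()) != null) {
                lineNo++;
                if (p.matcher(line).find()) {
                    System.out.println(lineNo + ":" + line);
                }
            }
        }
    }

For example, "java TinyGrep TODO < main.py" works just as well on C files, Makefiles, or HTML, with no per-language plugin.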

Most good programming languages have the same flavor -- they are built around a single data structure. In C, this is the pointer + offset (structs, arrays). In Python/Lua it's the dictionary. In R it's the data frame; in Matlab it's the matrix. In Lisp/Scheme it's the list.

Java and C++ tend to have exploding codebase size because of the proliferation of types, which cause the M * N explosion. Rich Hickey has some good things to say about this.

I would posit that Windows and certain other software ecosystems have reached a fundamental scaling limit because of the O(M*N) explosion. Even if you have $100 billion, you can't write enough code to cover this space.

Another part of this is the dichotomy between visually-oriented people and language-oriented people. A great read on this schism is: http://www.cryptonomicon.com/beginning.html . IMO language-oriented tools compose better and abstract better than visual tools. In this thread, there is a great point that code is not 2D or 3D; it has richer structure than can really be represented that way.

I really like Bret Victor's talks and ideas. His other talks are actually proposing solutions, and they are astounding. But this one comes off more as complaining, without any real solutions.

He completely misunderstands the reason for the current state of affairs. It's NOT because we are ignorant of history. It's because language-oriented abstractions scale better and let programmers get things done more quickly.

That's not to say this won't change, so I'm glad he's working on it.

[1] http://www.cs.yale.edu/quotes.html


> Most good programming languages have the same flavor -- they are built around a single data structure. In C, this is the pointer + offset (structs, arrays). In Python/Lua it's the dictionary. In R it's the data frame; in Matlab it's the matrix. In Lisp/Scheme it's the list.

Lists are not very important for Lisp, apart from writing macros.

> Java and C++ tend to have exploding codebase size because of the proliferation of types, which cause the M * N explosion. Rich Hickey has some good things to say about this.

Haskell has even more types, and no exploding codebases. The `M * N explosion' is handled differently there.

> For the source code example, how would you write a language-independent grep if every language had its own representation? How about diff? hg or git? merge tools? A tool to jump to source location from compiler output? It takes multiple languages to solve any non-trivial problem, so you will end up with an M x N combinatorial explosion (N tools for each of M languages), whereas you want M + N (M languages + N tools that operate on ALL languages).

You'd use plugins and common interfaces. (I'm all in favour of text, but the alternative is still possible, if hard.)


> Lists are not very important for Lisp, apart from writing macros.

I'm not sure I agree. Sure, in most dialects you are given access to Arrays, Classes, and other types that are well used. And you can choose to avoid lists, just like you can avoid using dictionaries in Python, and Lua. But I find that the cons cell is used rather commonly in standard Lisp code.


You can't *really* avoid dictionaries in Python, as namespaces and classes actually are dictionaries, and can be treated as such.

In Lua, all global variables are inserted into the global dictionary _G, which is accessible at runtime. This means you can't even write a simple program consisting of only functions, because they are all added to and executed from that global dictionary.

There were also other languages which could have been mentioned. In JavaScript, for instance, functions and arrays are actually just special objects/dictionaries. You can call .length on a function, and you can add functions to the prototype of Array.


Those are just implementation details. They're not really relevant to the way your program is constructed or the way you reason about it.


I think Haskell handles the combinatorial explosion with its polymorphic types and higher-order abstractions. There are many, many types, but there are also abstractions over types. Java/C++ do not get that. `sort :: Ord a => [a] -> [a]` works for an infinite number of types that have an `Ord` instance.

I don't agree that lists are not very important for Lisp, they're essential for functional programming as we know it today.


It's not an either-or. My prediction is that Victor's tools will be an optional layer on top of text-based representations. I'd go as far as to say that source code will always be represented as text. You can always build Visual Studio and IntelliJ and arbitrarily complex representations on top of text. It's just that it takes a lot of engineering effort, and the tools become obsolete as new languages are developed. We HAD Visual Studio for VB; it's just that everyone moved onto the web and Perl/Python/Ruby/JS, and they got by fine without IDEs.

There are people trying to come up with a common structured base for all languages. The problem is that if it's common to all languages, then it won't offer much more than text does. Languages are that diverse.

I don't want to get into a flame war, but Haskell hasn't passed a certain threshold for it to be even considered for the problem of "exploding code base size". That said, the design of C++ STL is basically to avoid the M*N explosion with strong types. It is well done but it also causes a lot of well-known problems. Unfortunately most C++ code is not as carefully designed as the STL.


>I don't want to get into a flame war, but Haskell hasn't passed a certain threshold for it to be even considered for the problem of "exploding code base size".

What threshold?

>It is well done but it also causes a lot of well-known problems.

Like what? And why do you assume those problems are inherent to having types?


Lists are not very important for Lisp, apart from writing macros.

Or in other words, you haven't quite grokked Lisp yet. The macros are the point!


>Java and C++ tend to have exploding codebase size because of the proliferation of types, which cause the M * N explosion.

I think Haskell and friends demonstrate that your explanation for Java and C++ "exploding" is incorrect. Haskell is all about types, lots of types, and making your own types is so basic and simple that it happens all the time, everywhere. Yet there is no code explosion.


See my comment below about C++ STL. There are ways to avoid the combinatorial explosion with strong types, but there are also downsides.


@InclinedPlane: I would suggest you ask yourself one question: what is the difference between a programmer and a user? If I code in language XY, I'm already a consumer of a library called XY (and of the operating system and the global network). Most "programmers" today have nothing to do with memory (let alone the hardware). The next big thing is never just a simple iteration of the current paradigm. The problem is that many of the ideas he mentions were not practical for a long time. On the other hand, much of computing has simply to do with conventions (protocols of different kinds).


Some ideas worth mentioning are in Gerry Sussman's video.

The link is in the presentation.


To add to the UNIX thought, it goes beyond text configuration -- the very design of system calls that can fail with the EINTR error code was a kind of "worse is better" design approach.


> Similarly, he casually mentions a programming language founded on unique principles designed for concurrency, he doesn't name it but that language is Erlang.

I haven't seen the talk yet and have just browsed the slides, but just from your description Mozart/Oz could also fit the bill, since it was designed for distributed/concurrent programming as well. Furthermore, Oz's "Browser" has some f-ing cool interactive stuff made possible by the specific model of concurrency in the system. I must say that programming in Mozart/Oz feels completely different to Erlang, despite the fact that both have a common origin in Prolog.

<edit: adding more ..>

> He is stuck in a model where "programming" is the act of translating an idea to a machine representation. But we've known for decades that at best this is a minority amount of the work necessary to build software.

There is a school of thought whereby "programming" is the act of coding itself. To put it in other words, it is a process of manipulating a formal system to cause effects in the world. That system could be a linear stream of symbols, or a 2D space of tiles, or any of myriad forms, but in the end much of the "pleasure of programming" is attributable to the possibility of play with such a system.

To jump a bit ahead, consider the Leap Motion controller. What if we had a system built where we can sculpt 3D geometries and had a way to map these "sculptures" to programs for doing various things? I say this 'cos "programming", a lot of the times, feels like origami to me when I'm actually coding. Lisps, in particular, evoke that feeling strongly. So, I'm excited about Leap Motion for the potential impact it can have on "programming".

I think representations are important, and the "school of direct manipulation" misses this point. Just because we have great computing power at our finger tips today, we won't revert to using roman numerals for numbers. One way to interpret the claims of proponents of direct manipulation is that programming ought to be a dialogue between a representation and the effect on the world instead of a monologue or, at best, a long distance call.

Bret has expressed favour for dynamic representations in some of his writings, but I'm not entirely sure that they are the best for dynamic processes. There is nothing uncool about static representations like code. (Well, that's all we've had for ages now, anyway.) What we've been lacking is a variety of static representations, since language has been central to our programming culture and history. What would an alien civilization program in if they had multidimensional communication means?

To conclude, my current belief is that anyone searching for "the one language" or "the one system" to rule them all is trying to find Joshu's "Mu" by studying scriptures. Every system (a.k.a. representation) is going to have certain aspects that it handles well and certain others that it does poorly on. That ought to be a theorem or something, but I'm not sophisticated enough, yet, to formally articulate that :)


Ok, just saw the talk and Bret's certainly referring to Erlang here.



