
Well, I don't agree with your premise that I haven't addressed any of their criticisms.

A main theme underlying their complaint is that there's "numerous examples of substantial and fundamental innovation happening all the time."

But Bret Victor clearly knows this. Obviously, he does not think every-single-person-in-the-world has failed to pursue other computational models. The question is, how does the mainstream programming culture react to them? With hostility? Aggressive ignorance? Is it politically hard for you to use these ideas at work, even when they appear to provide natural solutions?

Do we live in a programming culture where people choose the technologies they do, after an openminded survey of different models? Does someone critique the complectedness of the actor model, when explaining why they decided to use PHP or Python? Do they justify the von Neumann paradigm, using the Connection Machine as a negative case study?

There are other shaky points in these HN threads. For instance, inferring that visual programming languages have been debunked based on a few instances (particularly when the poster doesn't, say, evaluate what was wrong with the instances they have in mind, or ask whether those instances really exhaust the space of potential visual languages).



@cali: I completely agree with your points. @InclinedPlane is missing the main argument.

Here is my take. TL;DR: Computing needs an existential crisis before the current programming zeitgeist is replaced. Until then, we need to encourage as many people as possible to live on the bleeding edge of "Programming" epistemology.

Long Version: For better or for worse, humans are pragmatic. Fundamentally, we don't change our behavior until there is a fire at our front door. In the same sense, I don't think we are going to rewrite the book on what it means to "program" until we reach an existential peril. Intel demonstrated this by switching to multicore processors only after realizing that Moore's law could not continue through clock-speed increases alone.

You can't take one of Bret's talks as his entire critique. This talk is part of a body of work in which he points out and demonstrates our lack of imagination. Bret himself uses another, seemingly unrelated, historical anecdote to explain his work: Arabic numerals. In his own words:

"Have you ever tried multiplying roman numerals? It’s incredibly, ridiculously difficult. That’s why, before the 14th century, everyone thought that multiplication was an incredibly difficult concept, and only for the mathematical elite. Then arabic numerals came along, with their nice place values, and we discovered that even seven-year-olds can handle multiplication just fine. There was nothing difficult about the concept of multiplication—the problem was that numbers, at the time, had a bad user interface."

Interestingly enough, the "bad user interface" wasn't enough to dethrone Roman numerals until the Renaissance. The PRAGMATIC reason we abandoned them was the increase in Mediterranean trade.
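
To make the "bad user interface" point concrete, here is a toy Python sketch (my own illustration, not from Bret's talk): place-value digits support the mechanical long-multiplication procedure that children learn, while Roman numerals offer no comparable glyph-local procedure -- in practice you convert out of them to multiply.

    # Toy comparison: multiplying in place-value notation vs. Roman numerals.
    # (Illustrative only; not from the talk.)
    ROMAN = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
             (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
             (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]

    def to_roman(n):
        """Render n as Roman numerals: pure accumulation, no place value."""
        out = []
        for value, glyph in ROMAN:
            while n >= value:
                out.append(glyph)
                n -= value
        return "".join(out)

    def long_multiply(a, b):
        """Schoolbook long multiplication over place-value digit strings.
        Every step is a one-digit product plus a carry."""
        result = [0] * (len(a) + len(b))
        for i, da in enumerate(reversed(a)):
            for j, db in enumerate(reversed(b)):
                result[i + j] += int(da) * int(db)
                result[i + j + 1] += result[i + j] // 10  # carry
                result[i + j] %= 10
        while len(result) > 1 and result[-1] == 0:
            result.pop()
        return "".join(map(str, reversed(result)))

    print(long_multiply("27", "14"))                        # 378, by local steps
    print(to_roman(27), "x", to_roman(14), "=", to_roman(378))
    # XXVII x XIV = CCCLXXVIII -- the notation gives you nothing to "carry",
    # so there is no analogous digit-by-digit routine.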

Personally, I believe that Bret is providing the foundation for the next level of abstraction that computing will experience. That's a big deal. Godspeed.


Perhaps. But I think he is a visual thinker (his website is littered with phrases like "the programmer needs to see...."). And that is a powerful component of thinking, to be sure. But think about math. Plots and charts are sometimes extremely useful, and we can throw them up and interact with them in real time with tools like Mathcad. It's great. But it only goes so far. I have to do math (filtering, calculus, signal processing) almost every day at work. I have some Python scripts to visualize some stuff, but by and large I work symbolically, because that is the abstraction that gives me the most leverage. Sure, I can take a plotted continuous function and visually see the integral and derivative, and that can be very useful. OTOH, if I want to design a filter, I need to design it with criteria in mind and solve equations, not put an equation into a tool like Mathcad and tweak coefficients and terms until it looks right. Visual processing falls down for something like that.
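
To put a concrete shape on that filter example (a minimal sketch assuming SciPy; the parent comment names no library, and the numbers here are invented): the criteria come first, and the order and coefficients are solved for, rather than tweaked until a plot looks right.

    # Criteria-driven low-pass filter design -- a sketch, assuming SciPy.
    from scipy import signal

    fs = 1000.0               # sample rate, Hz (assumed for illustration)
    wp, ws = 100.0, 150.0     # passband edge / stopband edge, Hz
    gpass, gstop = 1.0, 40.0  # max passband ripple / min stopband atten., dB

    # Solve for the minimum Butterworth order that meets the spec...
    order, wn = signal.buttord(wp, ws, gpass, gstop, fs=fs)
    # ...then compute the coefficients analytically from that order.
    b, a = signal.butter(order, wn, btype="low", fs=fs)
    print(f"order {order}, natural frequency {wn:.1f} Hz")

    # A frequency-response plot can confirm the spec is met, but the
    # design itself came from solving equations, not adjusting a picture.
    w, h = signal.freqz(b, a, fs=fs)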

Others have posted about the new IDEs that they are trying to create. Great! Bring them to us. If they work, we will use them. But I fundamentally disagree with the premise that visual is just flat-out better. Absolutely, have the conversation, and push the boundaries. But to claim that people who say "you know, symbolic math actually works better in most cases" are resisting change (you didn't say that so much as others did) is silly. We are just stating facts.

Take your Arabic numerals example. Roman numerals are, what, essentially VISUAL!! III is 3. It's a horrible way to do arithmetic. Or imagine a "visual calculator", where you try to multiply 3*7 by stacking blocks or something. Just the kind of thing I might use to teach a third grader, but never, ever something I am going to use to balance my checkbook or compute loads on bridge trusses. I'm imagining sliders to change the x and y, and the blocks rearranging themselves. Great teaching tool. A terrible way to do math, because it is a very striking but also very weak abstraction.

Take bridge trusses. Imagine a visual program that shows loads in colors - high forces are red, perhaps. A great tool, obviously (we have such things, btw). But to design a bridge that way? Never. There is no intellectual scaffolding there (pun intended). I can make arbitrary configurations and look at how the colors change, but engineering is multidimensional. What do the materials cost? How hard are they to get and transport? How many people will be needed to bolt this strut? How do the materials behave in compression vs. tension? What are the effects of weather and age? What are the resonances? It's a huge optimization problem that I'm not going to solve visually (though, again, visuals will often help me conceptualize a specific element). That I am not thinking or working purely visually is not evidence that I am not being "creative" - I'm just choosing the correct abstraction for the job. Sometimes that is visual, sometimes not.
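
A toy version of that point (the cost model and every number below are invented, and SciPy is assumed): even a single-member caricature of truss design is "minimize cost subject to constraints", which is a symbolic/numeric formulation, not a visual one.

    # Single-member caricature of truss design as constrained optimization.
    # All figures are invented for illustration.
    from scipy.optimize import minimize

    LOAD = 50e3           # applied axial load, N (assumed)
    YIELD = 250e6         # allowable stress, Pa (typical steel, illustrative)
    COST_PER_M3 = 8000.0  # material cost, $/m^3 (invented)

    def cost(x):
        area, length = x
        return COST_PER_M3 * area * length   # objective: material cost

    def stress_margin(x):
        area, _ = x
        return YIELD - LOAD / area           # must stay >= 0 (no yielding)

    res = minimize(
        cost,
        x0=[1e-3, 5.0],                      # initial guess: area m^2, length m
        bounds=[(1e-5, 1e-1), (4.0, 10.0)],  # fabrication/geometry limits
        constraints=[{"type": "ineq", "fun": stress_margin}],
    )
    area, length = res.x
    print(f"area {area*1e4:.2f} cm^2, length {length:.2f} m, cost ${res.fun:.2f}")
    # A color-coded stress plot can tell you whether one design is safe;
    # it cannot, by itself, trade cost against margin the way this does.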

So, okay, the claim is that perhaps visual will/should be the next major abstraction in programming. I am skeptical, for all the reasons above - my current non-visual tools provide me a better abstraction in so many cases. Prove me wrong, and I will happily use your tool. But please don't claim these things haven't been thought of, or that we are being reactionary by pointing out the reasons we choose symbolic and textual abstractions over visual ones when we have the choice (I admit sometimes the choice isn't there).


Bret has previously given a talk[1] that addresses this point. He discusses the importance of using symbolic, visual, and interactive methods to understand and design systems. [2] He specifically shows an example of digital filter design that uses all three. [3]

Programming is very focused on symbolic reasoning right now, so it makes sense for him to focus on visual and interactive representations (and interactive models are often intertwined with visual representation). Because of that focus, his case for a balanced approach to programming can read like constant harping on visualization. I think he is trying to get the feedback loop between creator and creation as tight as possible, using all available means to represent the system.

The prototypes of his I have seen that involve direct programming tend not to look like LabVIEW; instead they are augmented IDEs in which visual representations of processing are linked to the symbolic representations used to create them. [4] This way you can manipulate the output and see how the system changes, see how the linkages in the system relate, and change the symbols to get different output. It is a tool for making systems represented by symbols, but interaction with the system can come through either a visual or a symbolic representation.
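
Here is a minimal sketch of one strand of that linkage, using matplotlib's Slider widget (my illustration, not Bret's actual prototype): a draggable coefficient acts as the visual handle, and the displayed expression and its plot stay in sync with it.

    # Linked symbolic/visual representation, in miniature (not Bret's tool).
    import numpy as np
    import matplotlib.pyplot as plt
    from matplotlib.widgets import Slider

    x = np.linspace(0, 2 * np.pi, 400)
    fig, ax = plt.subplots()
    plt.subplots_adjust(bottom=0.25)
    (line,) = ax.plot(x, np.sin(1.0 * x))
    ax.set_title("y = sin(a * x),  a = 1.00")  # the symbolic view, kept in sync

    slider_ax = plt.axes([0.15, 0.1, 0.7, 0.04])
    a_slider = Slider(slider_ax, "a", 0.1, 5.0, valinit=1.0)

    def update(val):
        # Direct manipulation: dragging the slider rewrites the visual
        # representation and the displayed symbolic form together.
        line.set_ydata(np.sin(a_slider.val * x))
        ax.set_title(f"y = sin(a * x),  a = {a_slider.val:.2f}")
        fig.canvas.draw_idle()

    a_slider.on_changed(update)
    plt.show()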

[1] http://worrydream.com/MediaForThinkingTheUnthinkable/note.ht...

[2] http://vimeo.com/67076984#t=12m51s

[3] http://vimeo.com/67076984#t=16m55s

[4] http://vimeo.com/67076984#t=25m0s


Part of Bret's theory of learning (which I agree with) is that when "illustrating" or "explaining" an idea it is important to use multiple simultaneous representations, not solely symbolic and not solely visual. This increases the "surface area" of comprehension, so that a learner is much more likely to find something in this constellation of representations that relates to their prior understanding. In fact, that comprehension might only come out of seeing the constellation. No representation alone would have sufficed.

Further, you then want to build a feedback loop by allowing direct manipulation of any of the varied representations and have the other representations change accordingly. This not only lets you see the same idea from multiple perspectives -- visual, symbolic, etc. -- but lets the learner see the ideas in motion.

This is where the "real time" stuff comes in, and also why he gets annoyed when people see it as the point of his work. It's not; it's just a technology to accelerate the learning process. It's a very compelling technology, but it's not the foundation of his work. This is like reducing Galileo to a really good telescope engineer -- not that Bret Victor is Galileo.

I think he emphasizes the visual only because it's so underdeveloped relative to symbolic. He thinks we need better metaphors, not just better symbols or syntax. He's not an advocate of working "purely visually." It's the relationship between the representations that matters. You want to create a world where you can freely use the right metaphor for the job, so to speak.

That's his mission. It's the mission of every constructivist interested in using computers for education. Bret is really good at pushing the state of the art which is why folks like me get really excited about him! :D

You might not think Bret's talks are about education or learning, but virtually every one is. A huge theme of his work is this question: "If people learn via a continual feedback loop with their environment -- in programming we sometimes call this 'debugging' -- then what are our programming environments teaching us? Are they good teachers? Are they (unknowingly) teaching us bad lessons? Can we make them better teachers?"



