
It's a fun talk by Bret and I think he echoes a lot of the murmurings that have been going around the community lately. It's funny that he latched onto some of the same core tenets we've been kicking around, but from a very different angle. I started with gathering data on what makes programming hard, he looked at history to see what made programming different. It's a neat approach and this talk laid a good conceptual foundation for the next step: coming up with a solution.

In my case, my work on Light Table has certainly proven at least one thing: what we have now is very far from where we could be. Programming is broken and I've finally come to an understanding of how we can categorize and systematically address that brokenness. If these ideas interest you, I highly encourage you to come to my StrangeLoop talk. I'll be presenting that next step forward: what a system like this would look like and what it can really do for us.

These are exciting times and I've never been as stoked as I am for what's coming, probably much sooner than people think.

EDIT: Here's the link to the talk https://thestrangeloop.com/sessions/tbd--11



APL. Start there. Evolve a true language from that reference plane. By this I mean with a true domain-specific (meaning: programming) alphabet of symbols that encapsulates much of what we've learned in the last 60 years. A language allows you to speak (or type), think and describe concepts efficiently.

Programming in APL, for me at least, was like entering into a secondary zone after you were in the zone. The first step is to be in the "I am now focused on programming" zone. Then there's the "I am now in my problem space" zone. This is exactly how it works with APL.

I used the language extensively for probably a decade and nothing has ever approached it in this regard. Instead we are mired in the innards of the machine, micromanaging absolutely everything with incredible verbosity and granularity.

I really feel that for programming/computing to really evolve to another level we need to start losing some of the links to the ancient world of programming. There's little difference between what you had to do with a Fortran program and what you do with some of the modern languages in common use. That's not the kind of progress that is going to make a dent.


> Instead we are mired in the innards of the machine micromanaging absolutely everything with incredible verbosity.

This is one area where Haskell really shines. If you want the machine to be able to do what you want without micromanaging how, then you need a way to formally specify what you mean in an unambiguous and verifiable way. Yet it also needs to be flexible enough to cross domain boundaries (pure code, IO, DSLs, etc).

Category theory has been doing exactly that in the math world for decades, and taking advantage of that in the programming world seems like a clear way forward.
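To make that a bit more concrete, here's a minimal Haskell sketch (the toy Expr type is just something I made up for illustration): one Functor-based definition works unchanged across pure values, IO actions, and a little DSL, without the caller ever spelling out how each structure gets traversed.

    {-# LANGUAGE DeriveFunctor #-}

    -- a toy expression DSL, purely for illustration
    data Expr a = Lit a | Add (Expr a) (Expr a)
      deriving (Show, Functor)

    -- one definition that crosses "domain boundaries"
    double :: Functor f => f Int -> f Int
    double = fmap (* 2)

    main :: IO ()
    main = do
      print (double (Just 21))              -- pure code: Just 42
      print (double (Add (Lit 1) (Lit 2)))  -- DSL:       Add (Lit 2) (Lit 4)
      n <- double (return 21)               -- IO:        doubles the action's result
      print n                               --            42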

The current state of the industry seems like a team of medieval masons (programmers) struggling to build a cathedral with no knowledge of physics beyond anecdotes that have been passed down the generations (design patterns), while a crowd of peasants watches from all angles to see if the whole thing will fall down (unit tests).

Sure, you might be able to build something that way, but it's not exactly science, is it?


This is the kind of talk from Haskell folks that I find incredibly annoying. Where's Haskell's Squeak? Where's Haskell's Lisp Machine? It doesn't take much poking around to find out that non-trivial interactive programming like sophisticated games and user interfaces is still very much cutting-edge stuff.

Gimme a break.


I'm sorry, but you're upset because folks are passionate about a language that brings a new perspective, and maybe is not exactly as useful in some areas as existing solutions? This is exactly the kind of attachment Bret warns about.


I don't think I expressed attachment to any particular solution or approach - I simply pointed out an extremely large aspect of modern software engineering where Haskell's supposed benefits aren't all that clear. So who's attached?


This strikes me as being mad that the Tesla doesn't compete in F1.


Are you trying to compare interactive software, one of the dominant forms of programs and widely used by billions of people every day, to formula 1 cars, an engineering niche created solely for a set of artificial racing criteria?

A better analogy would be being mad that the Tesla can't drive on the interstate.


"sophisticated games" pretty specifically implies contemporary 3d gaming, which is not a useful criteria for exploring a fundamental paradigm shift in programming.


The fact that you think a lisp machine is an "extremely large aspect of modern software engineering" certainly makes me feel that you are expressing an attachment to a particular approach.


I'm not saying it's all there yet, just that it's a way forward.


We have many beautiful cathedrals, don't we? So it is a bona fide fact that you can build something with the current state of the industry. As far as the analogy goes, I would alter it so that the peasants aren't simply watching but poking the masonry with cudgels. Lastly, scientific methods of building aren't necessarily better; they follow an order rooted in doctrine, yet I can quickly think of all those scientifically built rockets that exploded on launch. To play devil's advocate, I'm not convinced that a scientific method is better than the current haphazard one we have in place for development.


I would think a major benefit of a scientific method would be the ability to measure performance. Without measurement, how can we progress?

Don't confuse a local maximum for the global maximum. We need people exploring other slopes for the chance of reaching the apex, or at least some higher local maxima.


I think your architecture metaphor is apt.


What makes APL different from, say, Lisp or Haskell? Do you have tutorials to recommend?


It's very hard to find good tutorials on APL because it's not very popular and most of its implementations are closed-source and not compatible with each other's language extensions, but it's most recognizable for its extreme use of non-standard codepoints. Every function in APL is defined by a single character, but those characters range from . to most of the Greek alphabet (taking similar meanings as in abstract math) to things like ⍋ (sort ascending). Wikipedia has a few fun examples if you just want a very brief taste; you can also read a tutorial from MicroAPL at http://www.microapl.com/apl/tutorial_contents.html

It's mostly good for being able to express mathematical formulas with very little translation from the math world - "executable proofs," I think the quote is - and having matrices of arbitrary dimension as first-class values is unusual if not unique. But for any practical purpose it's to Haskell what Haskell is to Java.


> But for any practical purpose it's to Haskell what Haskell is to Java.

Can you elaborate on this? As I understand, the core strengths of APL are succinct notation, built-in verbs which operate on vectors/matrices, and a requirement to program in a point-free style. All of this can be done in Haskell.
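For instance - and this is just a sketch, not a claim that the comparison is fair - the classic APL mean idiom is roughly (+/X)÷⍴X, and a point-free Haskell rendering of the same thing is about as direct:

    import Control.Applicative ((<$>), (<*>))

    -- "average" written point-free: sum-reduce divided by length
    mean :: [Double] -> Double
    mean = (/) <$> sum <*> (fromIntegral . length)

So the notation differs, but that's exactly why I'm asking what the conceptual jump is supposed to be.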


A Java programmer unfamiliar with Haskell looks at a Haskell program and shouts, "I can't make even the slightest bit of sense out of this!"

A Haskell programmer unfamiliar with APL looks at an APL program and...


Most Haskell programmers should be familiar with right-to-left point-free style, and should be able to infer that symbols stand in for names.

Of course, understanding the individual symbols is a different matter, but hardly requiring a conceptual leap.


>A Haskell programmer unfamiliar with APL looks at an APL program and...

And says "what's the big deal?". That's exactly the question, what is the big deal. APL isn't scary, I'm not shouting "I can't make sense of this", I am asking "how is this better than haskell in the same way haskell is better than java?".


I'm not really interested in debating the reaction of an imagined Haskell programmer. I was just restating what the grandparent's analogy meant.

Your question is fine, but not what he meant by the analogy.


I'm not imagined, I am real. I know you were restating the analogy, the problem is that the analogy is wrong. I can't find anything about APL that a haskell developer would find new or interesting or frightening or anything like that.


Ok.


More esoteric organization/concepts for anyone coming from the C family (which is basically everyone), more out-there notation, more deserving of the title "write-only," and less ability to do anything you might want to do with a real computer beyond using it as a calculator. I wouldn't want to do much work with Haskell's GTK bindings, but at least they exist.


That tutorial is deeply unimpressive. It seems very excited about APL having functions, and not directly mapping to machine-level constructs. In 1962 I can imagine that being impressive (if you weren't familiar with Lisp or ALGOL); today, not so much. The one thing that does seem somewhat interesting is the emphasis it puts on "operators" (i.e., second-order functions). This is obviously not new to anyone familiar with functional programming, but I do like the way that tutorial jumps in quite quickly to the practical utility of a few simple second-order functions (reduce, product, map).


Like I said, it's hard to find good ones; I didn't say I had succeeded. I learned a bit of it for a programming language design course, but I never got beyond the basic concepts.


Definitely watch this video http://www.youtube.com/watch?v=a9xAKttWgP4


APL has its own codepage? I have to say, that's a better and simpler way of avoiding success at all costs than Haskell ever found.

Not that I dislike the idea -- on the contrary, I'm inclined to conclude from my excitement over this and Haskell that I dislike success...


Well in the end it doesn't matter if your language is looking for popularity or not. What matters is what you can do with it. You think a language with weird symbols all around can't win? Just look at Perl.

On a related note, if one plans to sell the Language of The Future Of Programming, I swear this thing will meet the same fate as Planner, NLS, Sketchpad, Prolog, Smalltalk and whatnot if it cannot help me with the problems I have to solve just tomorrow.


Try J. Or Kona (open source K). All ascii characters.


Haskell has a rule - to avoid popularity at all costs


You should parse it as "avoid (popularity at all costs)" rather than "(avoid popularity) at all costs".


Well of course it does. Popularity is a side effect.


All the decent tutorials that I know of were in book form. Unless someone's scanned them they're gone. I know mine got destroyed in a flooded basement.

Now, I didn't learn APL from a tutorial, I learned it (in 1976) from a book. This book: http://www.jsoftware.com/papers/APL.htm from 1962.

If my memory hasn't been completely corrupted by background radiation, I've seen papers as early as the mid 1950s about this notation.

APL started out as a notation for expressing computation (this is not precise but good enough). As far as I'm concerned it's sitting at a level of abstraction higher than Haskell (arguably like a library on top of Haskell).

Now, in the theme of this thread, APL was able to achieve all of this given the constraints at the time.

The MCM/70 was a microprocessor-based laptop computer that shipped in 1974 (demonstrated in 1972, some prototypes delivered to customers in 1973) and ran APL using an 80 kHz (that's kilo) 8008 (with a whole 8 bytes of stack) with 2 kBytes (that's kilo) of RAM, or maxed out at 8 kB (again, that's kilo) of RAM. This is a small, slow machine that still ran APL (and nothing else). The IEEE Annals of the History of Computing has this computer as the earliest commercial, non-kit personal computer (IEEE Annals of the History of Computing, 2003: pg. 62-75). And, I say again, it ran APL exclusively.

Control Data dominated the supercomputer market in the 70s. The CDC 7600 (designed by Cray himself, 36.4 MHz with 65 kWords (a word was some multiple of 12 bits, probably 60 bits but I'm fuzzy on that) and about 36 MFLOPS according to Wikipedia) was normally programmed in FORTRAN. In fact, this would be a classic machine to run FORTRAN. However, the APL implementation available was often able to outperform the equivalent FORTRAN, almost always when the code was written by an engineer (and I mean a civil, mechanical, industrial, etc. engineer, not a software engineer) rather than someone specialising in writing fast software.

I wish everyone would think about what these people accomplished given those constraints. And think about this world and think again about Bret Victor's talk.


Thank you. Please consider writing a blog so that this knowledge doesn't disappear.

Were those destroyed tutorials published books?


The ones I remember were all books. At the time, I thought this was one of the best books available: http://www.amazon.com/APL-Interactive-Approach-Leonard-Gilma... -- but I don't know if I'd pay $522 for it... actually I do know, and I wouldn't. The paper covered versions are just fine, and a much better price :-)

EDIT: I just opened the drop down on the paper covered versions. Prices between $34.13 and $1806.23!!! Is that real?!? Wow, I had five or six copies of something that seems to be incredibly valuable. Too late for an insurance claim on that basement flood.


Probably Amazon bot bidding wars: http://www.michaeleisen.org/blog/?p=358


I'd say abstraction and notation. Start here:

http://www.jdl.ac.cn/turing/pdf/p444-iverson.pdf


Grumble. I understand that you can't really share the talk before the conference, but man, you're teasing.

Share a thought or two with the peons who won't/can't travel :)


Haha it sucks actually - I love talking about this stuff, but I know I really need to save it for my talk so I'm tearing myself to pieces trying to keep quiet.

I guess one thing I will say is that our definition of programming is all over the place right now and that in order for us to get anywhere we need to scale it back to something that simplifies what it means to program. We're bogged down in so much incidental complexity that the definitions I hear from people are convoluted messes that have literally nothing to do with solving problems. That's a bad sign.

My thesis is that given the right definition all of a sudden things magically "just work" and you can use it to start cleaning up the mess. Without giving too much away I'll say that it has to do with focusing on data. :)


I feel the same way. As a programmer, I feel like there are tons of irrelevant details I have to deal with every day that really have nothing to do with the exercise of giving instructions to a computer.

That's what inspired me to work on my [nameless graph language](http://nickretallack.com/visual_language/#/ace0c51e4ee3f9d74...). I thought it would be simpler to express a program by connecting information sinks to information sources, instead of ordering things procedurally. By using a graph, I could create richer expressions than I could in text, which allowed me to remove temporary variables entirely. By making names irrelevant, using UUIDs instead, I no longer had to think about shadowing or namespacing.

Also, by avoiding text and names, I avoid many arguments about "coding style", which I find extremely stupid.

I find that people often argue about programming methodologies that are largely equivalent and interchangeable. For example, for every Object Oriented program, there is an equivalent non-object-oriented program that uses conditional logic in place of inheritance. For every curried program, there is an equivalent un-curried program that explicitly names its function arguments. In fact, it wouldn't even be that hard to write a program to convert from one to the other.
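For the curried case, the conversion is literally a pair of one-liners - here's a quick Haskell sketch, since its Prelude ships the converters (curry / uncurry):

    -- add is curried; add' takes an explicit tuple of arguments.
    -- curry and uncurry from the Prelude convert mechanically in
    -- either direction.
    add :: Int -> Int -> Int
    add x y = x + y

    add' :: (Int, Int) -> Int
    add' = uncurry add

    add'' :: Int -> Int -> Int
    add'' = curry add'   -- behaves exactly like add again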

I'm pretty excited about the array of parallel processors in the presentation though. If we had that, with package-on-package memory for each one, message passing would be the obvious way to do everything. Not sure how to apply this to my own language yet, but I'll think of something.


Have you used any of the graph ("dataflow") environments in common usage?

Max/MSP, Pure Data, vvvv, meemoo, Quartz Composer, Touch Designer, LabView, Grasshopper, WebAudioToy, just to name a few.


I have. I should play with them more, since I don't quite get DataFlow yet.

I'm used to JavaScript, so that's what I based my language on. It's really a traditional programming language in disguise, kinda like JavaScript with some Haskell influence. It's nothing like a dataflow language. On that front, perhaps those languages are a lot more avant-garde than mine.


> I'm pretty excited about the array of parallel processors in the presentation though. If we had that, with package-on-package memory for each one, message passing would be the obvious way to do everything.

Chuck Moore, the inventor of Forth, is working on these processors.

http://www.greenarraychips.com/home/products/index.html

It hasn't been an easy road.

http://colorforth.com/blog.htm


>By making names irrelevant, using UUIDs instead, I no longer had to think about shadowing or namespacing.

I've been trying to do something similar with a pet language :) Human names should never touch the compiler; they are annotations on a different layer.

But writing an editor for such a programming environment with better UX and scalability than a modern text-based editor is... an engineering challenge.


What do you think of my graph editor?

It's not perfect, and making lambdas is still a little awkward because I haven't made them resizable. Also, eventually I'd like the computer to automatically arrange and scale the nodes for you, for maximum readability. But I think it's pretty fun to use. It'd probably be even more fun on an iPad.

I'd love to make my IDE as fun to use as DragonBox


I think it's really nice! Usually these flow-chart languages have difficult UI, but this one is pretty easy to mess around in.

It would be good if, while clicking and dragging a new connection line that will replace an old one, the latter's line is dimmed to indicate that it will disappear. Also, those blue nodes need a distinguishing selection color.

It sounds like you're aiming more toward a fun tablet-usable interface, but:

Have you thought about what it would take to write large programs in such an editor? For small fun programs a graph visualization is cool, but larger programs will tend toward a nested indented structure (like existing text) for the sake of visual bandwidth, readability of mathematical expressions, control flow, etc.


There actually is a rather large program in there: http://nickretallack.com/visual_language/#/f2983238d90bd3e0a...

Use the arrow keys to move the box around. I suppose that's still a bit primitive, but I'll make some more involved programs once I fix up the scoping model a bit.

When I first started on this project, I thought at some point I would need to make a "zoom out" feature, because you might end up making a lot of nodes in one function. However, I have never needed this. As soon as you have too much stuff going on, you can just box-select some things and hit the "Join" button to get a new function. The restricted workspace actually forces you to abstract things more, and the lack of syntax allows you to reduce repetition more than would be practical in a textual language.

For example, in textual languages, reducing repetition often requires you to introduce intermediate variables, which could actually make the program's text longer, so people will avoid doing it. However, in my language you get intermediate variables by connecting two sinks to the same source. The addition in program length is hardly noticeable.
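Here's the kind of trade-off I mean, as a textual sketch (Haskell-ish here, but the same applies in most languages):

    norm1, norm2 :: Double -> Double -> Double

    -- repeated subexpression, no extra line:
    norm1 x y = sqrt (x * x + y * y) / (sqrt (x * x + y * y) + 1)

    -- named intermediate: one repetition removed, one line added:
    norm2 x y = d / (d + 1)
      where d = sqrt (x * x + y * y)

In the graph editor the second version is just two connections to the same node, so that extra line of cost disappears.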


Your visual language looks a lot like labview. I'm on a phone on a bus and haven't dug in too far, but have you used labview before?


I'd like to try labview, but doesn't it cost lots of money? I guess I'll sign up for an evaluation copy.

The closest things to my language that I have seen are Kismet and UScript. Mine is different though because it is lazily evaluated and uses recursion as the only method of looping.

Some other things that look superficially similar such as Quartz Composer, ThreeNode, PureData, etc. are actually totally different animals. They are more like circuit boards, and my language is a lot more like JavaScript.


Buy yourself a LEGO Mindstorms NXT kit. $200 USD. The LEGO NXT programming environment is a scaled-down version of LabVIEW.


Nice but still classical programming. I personally think if statements are the problem. The easiest languages are trivial ones with no branching. Not Turing complete, but they rock when applicable e.g. html, gcode


That's true. I intended it to be feature-comparable with JavaScript, since I think JavaScript is a pretty cool language, and that is what it is interpreted in.

I don't think it is possible to make a program without conditional branches.

Somebody posted a link below about "Data Driven" design in C++. In it was an example of a pattern where each object has a "dirty" flag, which determines whether it needs processing, but they found that the branch mispredictions here cost more cycles than simply removing the branch.

My thought was, instead, what if you created two versions of that method -- one to represent when the dirty flag is true, and another to represent when the dirty flag is false -- and then instead of toggling the dirty flag, you could change the jump address for that method to point to the version it should use. If this toggle happens long enough before the processor calls that method, you would remove any possibility of branch prediction failure =].

I have no idea if this is practical or not, but it is amusing to consider programs that modify jump targets instead of using traditional conditional branches.
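Here's roughly what I mean, sketched at a much higher level in Haskell (this swaps a stored function value rather than patching a real jump target, so it says nothing about the machine-level cost, and the names are made up):

    import Data.IORef

    -- Instead of branching on a dirty flag on every call, store the
    -- behaviour itself and swap it at the point where the flag would
    -- have been toggled.
    type Slot = IORef (IO ())

    cleanUpdate, dirtyUpdate :: IO ()
    cleanUpdate = return ()                   -- nothing to do
    dirtyUpdate = putStrLn "reprocessing..."  -- the expensive path

    markDirty, markClean :: Slot -> IO ()
    markDirty slot = writeIORef slot dirtyUpdate
    markClean slot = writeIORef slot cleanUpdate

    runSlot :: Slot -> IO ()
    runSlot slot = readIORef slot >>= id      -- no conditional here

    main :: IO ()
    main = do
      slot <- newIORef cleanUpdate
      runSlot slot      -- does nothing
      markDirty slot
      runSlot slot      -- prints "reprocessing..."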


In actual compiled code, conditional branches (without branch prediction) are translated to jumps to different targets, which are specified inline with the instructions. Specifying a modifiable target would mean fetching it from a register (or worse, memory) and delaying execution until the fetch is complete (several cycles minimum on a pipelined machine). With branch predication, instructions are predicated on a condition inline and we avoid the costly jump instructions.

Read more: http://en.wikipedia.org/wiki/Branch_predication EDIT: Also: http://en.wikipedia.org/wiki/Branch_predictor

I think we more commonly use the latter, which tries to guess which way the code will branch and load the appropriate jump target. It's actually typically very successful in modern processors.


What if it modified the program text, instead of some external load-able value? Self-modifying code could allow explicit branch prediction.


I think we need to shift to a different model ... like liquid flow. Liquid dynamics are non-linear (ie. computable), yet a river has no hard IF/ELSE boundaries, it has regions of high pressure etc. which alter its overall behaviour. You can still compute using (e.g. graph) flows, but you don't get the hard edges which are a cause of bugs. Of course it won't be good at building CRUD applications, but it would be natural for a different class of computational tasks (e.g. material stress, neural networks)

(PS angular-js has done all that dirty flag checking if you like that approach)


I don't get it, why are you afraid of scooping yourself?

In the same way that HN frowns upon stealth startups, shouldn't we frown upon 'stealth theories'? If your thoughts are novel and deep enough, revealing something about them will only increase interest in your future talks, since you are definitionally the foremost thinker in your unique worldview. If the idea fails scrutiny in some way, you should want to hear about it now so you can strengthen your position.

What's the downside, outside of using mystery to create artificial hype?


Can't speak for the thread starter, but one downside to prematurely talking about something is confusion. Half-formed thoughts, rambling imprecise language, etc. can create confusion for his audience. The process of editing and preparing for a talk makes it more clear and concise. Maybe he is not yet ready to clearly communicate his concepts.


This reminds me of the "data-oriented design" debate that's been raging in game programming for a while. I assume you've seen e.g. http://harmful.cat-v.org/software/OO_programming/_pdf/Pitfal... (or similar papers)


I actually have a post about data-centricity in games, though from a very different angle. http://www.chris-granger.com/2012/12/11/anatomy-of-a-knockou...


It seems like one large breakthrough in programming could simply be using the features of a language in a manner that best suits the problem. That's what I get from your blog post: design for what makes sense - not for what looks normal during a review. One thing I envy about Lisp is that there seem to be few 'best practices' that ultimately make our applications harder to modify.


That should make a nice catchphrase: "solidified into disaster by 'best practices'"?


I've been thinking a lot about such issues too; particularly the pain points I have when ramping up against new systems. What information is missing that leaves me with questions? Can code deliver something thorough enough to be maintainable as a single source of truth?

I think the differences between reading and writing code are as big as those between sending and receiving packets. It's difficult to write code that also conveys the background information in your head that drove the decisions. Not only that, but you also have to juggle logic puzzles as you're doing it. And on the other side, you have to learn new domain languages (or type hierarchies), as well as what the program is supposed to do in the first place.

I think the idea of interacting with code as you build it is great, but how can we do that AND fix the information gap at the same time?


> definition of programming is all over the place

I agree.

For example, people do seem to assume that programming must involve, in some way, coding. Do we really need to code in some programming-language to be programming?

Changing security settings in a browser, for example, leads to quite different behaviors of the program. Isn't the user of the browser programming because they change the behavior of the program?

And this leads to....

> with focusing on data

If we focus on data, and hopefully better abstractions on how to manipulate that data, then wouldn't any user be able to alter a program, because they can adjust "settings" at almost any point within the program, in real time?

Wouldn't this then enable a lot more people to become programmers?

Anyway, just some thoughts.


The difference between setting parameters and programming is obviously that programming allows the creation of new functions. "Coding" is literally the act of translating a more-or-less formally specified program (that is, a set of planned actions) into a particular computer language. However, if being a programmer were only like being a translator, programming wouldn't be too hard for mere mortals. It's the other part - the one that involves methods, logic, and knowledge like the CAP theorem - that they have problems with. The fact is, not everyone can re-discover the bubble sort algorithm like we all here did when we were 8 or so. That's why we are programmers, and that's why they are not (and don't even want to be -- but they are nonetheless nice people and have other qualities, like having money). And these problems don't vanish if you switch from control-flow to data-flow or some bastardized-flow; they just change shape.


As some people have mentioned in this HN post, it is hard to define what programming means.

A program is the result of the automation of some process (system) that people have thought up (even things that couldn't exist before computers). Programming is the act of taking that process (system) and describing it in a computing device of some kind.

Programming currently requires some kind of mapping from the "real world" to the "computer world". The current mapping is done primarily with source code. So, it currently seems that people who are good at programming are good at mapping from the "real world" into the "computer world" via coding.

You seem to be making the point that some people are just good at programming because they can do things like "re-discover the bubble sort algorithm" or understand CAP theorem. These are very domain specific problems.

People who are able to "re-discover inventory control management" would do a great job of automating it (programming) if they had an easier way to map that process (system) to a computing device.

The ultimate goal (other than maybe AI) is a 1-to-1 mapping between a "real world" process (system) and a computing device that automates it.


I'm working on a system based on Bret Victor's "Inventing on Principle" talk. I believe that to achieve such a system, you need to find a way to add semantics to code, as well as to make use of proofs, so that you have enough constraints for your environment to be seamlessly self-aware while at the same time extensible.

I'm curious what you mean by data, though. Is it data in the "big data" sense? What I mean is, are we talking about gathering a lot of data on coding? My approach is based on that, anyway: lots of data on code, with a number of different analyzers (static and dynamic) that allow for the extraction of common idioms and constraints, while allowing the system to more easily help the user.

Of course, there's no magic and a lot of times I reach dead-ends, and while I'm eager to have enough to show the world, progress has been kinda slow lately.

Looking forward to your talk; be sure to link it here on HN.


I'm still not sure how you'll make LightTable work (well, scale); hopefully it involves a new programming model to make the problem more well-defined?

We had some great discussions at LIXD a couple of weeks ago, wish you could have been there. Everyone seems to be reinventing programming these days. We are definitely in competition to some extent. The race is on.


> hopefully it involves a new programming model to make the problem more well defined

That's exactly what I'm up to :)


Great. I'll send you something about my new programming model when it is written up decently, but it has to do with being able to modularly re-execute effectful parts of the program after their code or data dependencies have changed.


@ibdknox. I started programming Clojure in LightTable the other day. How does programming differ from editing text? Can we use gestures to navigate and produce code? I'm working on visual languages, which work well in certain domains but fail when one needs precise inputs. To me the language constructs we use are inherently tied to the production of code (Emacs + LISP). There is a very good reason the guy who built Linux came up with a great versioning system.

It is fair to say that Bret does not quite know yet what he is talking about, as he says himself. As if something big is going to happen and it is hard to say what exactly it is. I hope LightTable or something like it replaces Emacs & Vim in a couple of years. I think that being able to code in the browser will turn out to be unbelievably important (although it looks not that useful today).


I always kick myself for not having made it to Strange Loop when I lived in St. Louis.

I'll (hopefully) be looking forward to a vid of it on infoq at some point. :)


FWIW, after the talk I'll likely end up putting up a big thing about it on the internet somewhere. :)


This made me feel like you were going to write the content and then randomly post it in the comments section of someone's blog on clocks as garden decorations or something like that.

Either way, looking forward to it!


Can you attend Splash in October? I know academic conferences are probably not your pace, but there will be some good talks and it might be worth your time for networking with some of the academic PL types.


I'll look into it - that's likely to be a very busy time so I can't commit right at the moment.


"tenets"





