I don't quite understand this distinction between liveness and hotswapping, as all the examples of liveness that I've seen involve hotswapping code that causes graphic or audio side effects, and clearly that sort of thing has very real practical limitations.
There is one demo in Bret Victor's IoP talk where he is live programming a sorting algorithm and something non-graphical is visualized (in this case, control flow and local variable states). The hotswapping really isn't the focus at all; it's the live feedback that is important.
You might be fighting a battle that's already lost. For most people, live programming is having a running system with a REPL or equivalent attached so that you can run & update code inside that system. For example a running web server with some mechanism to add a new request handler while the server is running. Or a program that's playing programmatic music or some kind of graphical demonstration where you can add and redefine functions to change the music/graphics. This kind of "live programming" is basically the same as hot swapping, but some people also associate it explicitly with a live stage performance (e.g. music or visual).
With the kind of live programming that you mean there is some meta system that is monitoring your code and continuously giving you feedback on it. Perhaps it does this by just running the code and showing the output, perhaps it displays the execution trace in some way, perhaps it displays a visualization of a data structure over time. In a way, live feedback on static type errors could also be considered a limited form of live programming. Maybe it's a good idea to adopt a new term for this kind of live programming? It would also help from a marketing perspective I think, to have a new thing that people can be excited about rather than a term that they associate with a boring, limited and old fashioned feature (i.e. hotswapping).
Even with this second notion of live programming the question of updating running code does not go away. If you are developing a game, you may want to do live programming by running the game next to the code and having it be updated whenever the code changes. But a game has state, and how do you transport that state to the next version of the code? Hot swapping code by blindly mutating a function pointer in the running game is obviously not the answer. That's just a hack that works some of the time: it doesn't work when you update code while the running game is in the middle of something, it corrupts the state when there is a bug in the code, and it doesn't work at all when a data structure changes. The perspective "how do I transport the state to the next version of the code" is much better than "how do I shove new code into the running system with the old state". The same issue comes up with most programs, not just games. This is still an open problem as far as I know. For live programming we also need tools to manage and reset the state. When you have corrupted your state with a bug in your code, you want to be able to quickly go back to a previous non-corrupt state. Even if you change the entire programming model, you'll still have to address this state update problem in some way.
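A rough illustration of the "transport the state" framing, as a minimal sketch (all names here are hypothetical): the old and new versions of the code have different state shapes, and an explicit migration function maps one onto the other instead of blindly reusing the old bytes with the new code.

```python
from dataclasses import dataclass

@dataclass
class GameStateV1:
    score: int
    position: tuple  # (x, y)

@dataclass
class GameStateV2:
    score: int
    position: tuple  # the new code expects (x, y, z)

def migrate_v1_to_v2(old: GameStateV1) -> GameStateV2:
    # Explicit migration: decide how the old state maps onto the new
    # shape, rather than mutating a function pointer and hoping.
    x, y = old.position
    return GameStateV2(score=old.score, position=(x, y, 0.0))

old_state = GameStateV1(score=42, position=(3, 4))
new_state = migrate_v1_to_v2(old_state)  # resume the game with this
```

The point of the sketch is only the shape of the solution: the migration is a total function from old state to new state, so it works even when the data structures change.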
> You might be fighting a battle that's already lost.
REPLs and interactive programming existed long before the "live programming" experience was defined (by Hancock), and I only use the term to describe what Bret was showing off in his IoP talk as well as the experience the Light Table people seem to be striving for. I might be a bit pedantic, but there are plenty of other terms to describe the older, less live experiences! Hot swapping is just some mechanism to achieve some undefined experience; "I changed my code while my program is running" is vague enough. It typically has to be coupled with some other refresh mechanism (e.g. stack unwinding) to be useful, and even then it typically doesn't do more than it advertises (func pointer f was pointing to c_0 and now points to c_1).
Now live coding...is completely different and has an independent origin from live programming. Whereas "live coding" is about some programmer coding "live" in front of an audience, live programming is about receiving continuous comprehensible feedback about your program edits in the context of a running program. Quite a huge difference in meaning with very different goals!
> With the kind of live programming that you mean there is some meta system that is monitoring your code and continuously giving you feedback on it.
It's coding with a water hose vs. a bow and arrow. Debugging is not a separate experience and happens continuously while editing; if you can't provide enough continuous feedback to get rid of a separate debugging phase, then it's not really live programming.
> Maybe it's a good idea to adopt a new term for this kind of live programming? It would also help from a marketing perspective I think, to have a new thing that people can be excited about rather than a term that they associate with a boring, limited and old fashioned feature (i.e. hotswapping).
But the new term was co-opted to describe an old experience! Hancock's definition is unique (no one used this term before 2003), fairly complete, and it's very compatible with what Bret Victor was showing off in his IoP work. Why should we back off and invent yet another new term to describe the new experience, whose original term was hijacked to describe old experiences because people couldn't understand the new one? Crazy!
> But a game has state, and how do you transport that state to the next version of the code?
Today this is framework specific, and all major game engines have a way of doing this, as they want to allow the designers to script levels in real time without losing their context. It doesn't even require language support necessarily, but it's not something you ever get "for free"; it's something that is baked explicitly into your framework.
> The same issue comes up with most programs, not just games. This is still an open problem as far as I know. For live programming we also need tools to manage and reset the state. When you have corrupted your state with a bug in your code, you want to be able to quickly go back to a previous non-corrupt state. Even if you change the entire programming model, you'll still have to address this state update problem in some way.
No one has yet figured out how to come up with an expressive, general programming model that achieves this efficiently, but you can always "record" the input event history of your program and re-exec the entire program on a code change; i.e. there is an inefficient baseline. You still have problems with causality between program output and input; e.g. consider the user clicking a button that no longer exists or has moved! Lots of work still to do...just don't take away my term please!
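That inefficient baseline fits in a few lines; here is a toy sketch (handler names are made up): record every input event, and on a code change re-execute the whole program against the recorded log.

```python
def run(handle_event, event_log):
    # Re-execute the whole program from a fresh state over the log.
    state = {"count": 0}
    for ev in event_log:
        handle_event(state, ev)
    return state

def handler_v1(state, ev):
    if ev == "click":
        state["count"] += 1

def handler_v2(state, ev):       # the edited version of the code
    if ev == "click":
        state["count"] += 2      # behavior changed by the edit

events = ["click", "click", "click"]   # the recorded input history
state_v1 = run(handler_v1, events)     # {"count": 3}
state_v2 = run(handler_v2, events)     # re-exec from scratch: {"count": 6}
```

The replay is robust to internal state changes because the state is always rebuilt by the current code, but, as noted below, not to changes in the program's input surface itself.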
> Whereas "live coding" is about some programmer coding "live" in front of an audience, live programming is about receiving continuous comprehensible feedback about your program edits in the context of a running program. Quite a huge difference in meaning with very different goals!
Yes that's what I mean! A tiny difference in the terms we use: live coding vs live programming. That's why it's confusing to people.
> Why should we back off and invent yet another new term to describe the new experience whose original term was hijacked to descirbe old experiences because people couldn't understand the new one? Crazy!
Sometimes you have to cut your losses ;-) Another reason why I dislike the term "live programming" is because it confuses two separate concepts: continuous feedback and rich feedback.

Conventional debugging is pressing a button to run your code and see what the result is. Instead of just displaying the result, you could display the entire execution trace (time traveling debuggers). You could write unit tests and display which passed and which failed. You could output some visualization of some data structure in the program. For a game you could output a series of frames overlaid on each other (like Bret Victor does). Then you have type checking, for numerical code sensitivity to floating point bit width, performance profiling, etc. This is all about giving different kinds of feedback.

Continuous feedback is about getting feedback without having to press a button. Classical live programming is running the program continuously and continuously displaying its output. This is the continuous feedback version of ordinary debugging. Automated background unit test runners are the continuous version of unit testing. In the same way you have a continuous version of the other debugging techniques.

Both continuous feedback and rich feedback are very valuable, and although they are stronger together they are separate concepts. Perhaps it would be a good idea to have separate words for them; that would certainly greatly clarify "live programming".
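An automated background test runner of the kind mentioned here can be sketched as a simple polling watcher (a toy illustration, not any particular tool): watch a source file's modification time and rerun the tests whenever it changes, with no button press.

```python
import os
import tempfile
import time

def make_watcher(path, run_tests):
    """Return a poll function that reruns tests when `path` changes."""
    last_mtime = [None]
    def poll_once():
        mtime = os.path.getmtime(path)
        if mtime != last_mtime[0]:
            last_mtime[0] = mtime
            run_tests()          # continuous version of "press the button"
            return True
        return False
    return poll_once

results = []
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write("x = 1\n")
    path = f.name

poll = make_watcher(path, lambda: results.append("ran tests"))
poll()   # first poll sees the file: tests run
poll()   # unchanged: nothing happens
os.utime(path, (time.time() + 1, time.time() + 1))  # simulate an edit
poll()   # change detected: tests run again
```

A real runner would poll in a background thread (or use OS file-change notifications), but the loop above is the whole idea: the feedback becomes continuous without the feedback itself getting any richer.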
> but you can always "record" the input event history of your program and re-exec the entire program on a code change; i.e. there is an inefficient baseline. You still have problems with causality between program output and input; e.g. consider the user clicking a button that no longer exists or moved!
Yes, this is robust to internal data structure changes but no longer robust to UI changes. Viewing a program as a series of event stream transformers and time varying values as in FRP may help a bit. At the lowest level you have a stream of mouse clicks on pixel (x,y) and keyboard events with keycode k. Then the UI toolkit transforms that stream of events to event streams on UI elements: click on button "delete", text input to textfield "email address". Then that gets transformed to logical operations and data: delete_address_book_entry(...) and email_address. Then that gets transformed to the complete time varying high level state of the entire program (address_book_database). You can try to transport the state on each of the different levels, but in the end I think a completely automated solution is impossible. You are going to need domain specific info on how to do schema migration in the general case. For live programming that may not be worth it because you can just start over with a fresh state, but for things like web site databases you don't want to lose data so you have to manually migrate.

[tangent: Currently there are a lot of ad-hoc solutions to this, e.g. never remove an attribute from your data model, and when you add new attributes make sure all code works even if that attribute is missing. Reddit even goes so far as to structure its entire database as "key,attribute,value" triples instead of using a structured schema so that the schema never needs to change, but of course this just moves the problem from the database into the code that talks to the database. A principled approach where you write an explicit function to migrate your data from schema version n to schema version n+1 would work better. That migration function takes the entire state/database with schema n as input and produces an entire new state/database with schema n+1. When the state/database is large this would take too long to do in one pass, but with laziness it can be done on-demand.]
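The migrate-from-schema-n-to-n+1 idea can be sketched like this (the record shapes and version steps are invented for illustration): one explicit function per version step, composed to bring any old state up to the current schema.

```python
def migrate_1_to_2(db):
    # v2 splits "name" into "first" and "last"
    out = []
    for rec in db:
        first, _, last = rec["name"].partition(" ")
        out.append({"first": first, "last": last, "email": rec["email"]})
    return out

def migrate_2_to_3(db):
    # v3 adds a "verified" attribute with a default value
    return [dict(rec, verified=False) for rec in db]

MIGRATIONS = {1: migrate_1_to_2, 2: migrate_2_to_3}

def upgrade(db, from_version, to_version):
    # Compose the per-step migrations to cross several versions at once.
    for v in range(from_version, to_version):
        db = MIGRATIONS[v](db)
    return db

db_v1 = [{"name": "Ada Lovelace", "email": "ada@example.com"}]
db_v3 = upgrade(db_v1, 1, 3)
```

Each step is a whole-database-in, whole-database-out function, which is exactly what makes a lazy, on-demand variant possible: apply the same per-record logic only when a record is first touched.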
You don't need to limit yourself to running one instance of the program. You could record multiple input sequences representing multiple testing scenarios, and display the results of running each of them, or even display each of them being continuously performed so that you can see all the steps in between. In any case as you say there is lots of work still to be done.
> Sometimes you have to cut your losses ;-) Another reason why I dislike the term "live programming" is because it confuses two separate concepts: continuous feedback and rich feedback.
Again if we go back to Hancock's thesis, it's all there! It's not just about continuous feedback, it's about feedback with respect to a steady frame, it's about feedback that is relevant to your programming task, it's about feedback that is comprehensible. Hancock got it right the first time; there is no classical live programming (though there were other forms of liveness before). Actually, this is something I didn't get myself in my 2007 paper.
I don't think I need to abandon my word, especially since the standard bearers are Bret's demos; people want "that", not some sort of vaguely defined Smalltalk hot swapping experience. The community I'm fighting for the word is small and insignificant vs. the Bret fans :).
As for the rest of your post, explicit state migration is a big deal for deployment hot swapping (Erlang?) but ultimately a nuance during a debugging session. A "best" effort with reset as a back up is more usable.
But maybe take a look at our UIST déjà Vu paper [1]: here the input is defined as a recorded video that undergoes Kinect processing, and we are primarily interested in the intermediate output frames, not just the last one. So the primary problems are ones of visualization, while we ignore the hard problem of memoization and just replay the whole program. We even have the possibility to manage multiple input streams and switch between them.
Kinect programs are good examples of extremely stateful programs with well defined inputs. One of the next problems I'm trying to solve is how to memoize between the frames of such programs to make the feedback more lively.
> Again if we go back to Hancock's thesis, it's all there!
Yes, the problem is not with the definition of the term, but with the term "live programming" itself! It is too vague and can apply to too many concepts, and hence we're seeing people use and interpret it for many different things. Nobody will go read a thesis to learn what a term means. But then again, "object oriented programming" is vague as well. The notion of "steady frame" does seem oddly domain specific. In the words of that thesis: water hosing your way towards the correct floating point cutoff value, or towards the value of a parameter in a formula that produces an aesthetically pleasing result, works great, but I'm not convinced that you can "water hose" your way to a correct sorting algorithm, for example. Perhaps I have misunderstood what he meant though.
> A "best" effort with reset as a back up is more usable.
Yeah, I agree. I think the same primitives that can be used for building good explicit state migration tools, like saving the entire state and recording input sequences or recording and replaying higher level internal program events, can also be used for building good custom live programming experiences. So they are not two entirely disjoint problems.
> But maybe take a look at our UIST déjà Vu paper [1]
That's very interesting and looks like an area where live programming can work particularly well! A meta toolkit for building such domain specific live programming environments may be very useful if live programming is to take off in the mainstream. Of course LightTable is trying to do some of that, but while it started out in a quite exciting way they seem to be going back to being a traditional editor more and more (albeit extensible).
> One of the next problems I'm trying to solve is how to memoize between the frames of such programs to make the feedback more lively.
> A meta toolkit for building such domain specific live programming environments may be very useful if live programming is to take off in the mainstream.
That's exactly what we're trying to do with LT, see my "The Future is Specific" post [1].
> they seem to be going back to being a traditional editor more and more (albeit extensible)
This is a necessary detour as we build a foundation that actually works and allows us to really make the more interesting stuff. If we can't even deal with files, what good are we going to be at dealing with the much more complicated scenario of groups of portions of files? :)
> Yes, the problem is not with the definition of the term, but with the term "live programming" itself!
True. But I think the word has worked well until recently.
> The notion of "steady frame" does seem oddly domain specific.
Not really, but please wait for a better explanation until my next paper. One of Bret's examples in his IoP video is a correct sorting algorithm, actually, programmed with live feedback. I also mentioned a return to printf debugging on LtU before, and it's basically the direction I'm taking right now (that the UI represented by the steady frame is probably not the GUI that is used by an end user).
Their work doesn't seem to scale yet (all examples seem to be small algorithmic functions) while I'm already writing complete programs, compilers even, with my own methods, which are based more on invalidate/recompute rather than computing exact repair functions. I'll be able to relate to this work better when they start dealing with bigger programs and state.
"Sam Aaron promotes the benefits of Live Programming using interactive editors, REPL sessions, real-time visuals and sound, live documentation and on-the-fly-compilation." :D
> One of Bret's examples in his IoP video is a correct sorting algorithm, actually, programmed with live feedback. I also mentioned a return to printf debugging on LtU before, and it's basically the direction I'm taking right now
Yea, this interpretation of 'steady frame' is fully general I think: the ability to compare feedback of version n with feedback of version n+1 without getting lost. My interpretation was more specific because of the water hose vs bow and arrow analogy: continuously twiddling knobs until you get the result you want vs discrete aim-and-shoot. For example, picking the color of a UI widget with a continuous slider vs entering the rgb value and reloading. Since a sorting algorithm is not a continuous quantity, aim-and-shoot is inevitable, though you can still make it a bit more "continuous" by making the aim-and-shoot cycle more rapid.
> which are based more on invalidate/recompute rather than computing exact repair functions
You can do this in their framework; you can specify at which granularity you want to have the 'repair functions' and at which granularity you just want to recompute. For example if you have a List<Changeable<T>> then each item in the list can be repaired independently, if you have Changeable<List<T>> the whole list will be recomputed. Although you probably want to automatically find the right granularity rather than force the user to specify it?
Ya, I saw it too. I didn't see the talk though, but I expect it to be more of the same promotion of live coding as somehow actually being live programming (programming is like playing music! Ya...).
> Since a sorting algorithm is not a continuous quantity, aim-and-shoot is inevitable, though you can still make it a bit more "continuous" by making the aim-and-shoot cycle more rapid.
A sorting algorithm can be fudged as a continuous function. But then here continuous means "continuous feedback", not "data with continuous values." The point is not that the code can be manipulated via knob, but that as I edit the code (usually with discrete keystrokes and edits), I can observe the results of those edits continuously.
> You can do this in their framework; you can specify at which granularity you want to have the 'repair functions' and at which granularity you just want to recompute.
I'll have to look at this work more closely; the fact that we need custom repair functions at all bothers me (repair should just be defined simply as undo-replay). The granularity of memoization is an issue that has to be under programmer control, I think.
You don't need custom repair functions; the default is undo-replay, but in some cases it helps performance to have custom repair functions. For example, suppose you have a changeable list, and a changeable multiset (a set that also keeps a count for each element). Now you do theList.toMultiset(). If the list changes, then the multiset has to change as well. If you applied their framework all the way down, this might be reasonably efficient. But with custom repair functions it can be more efficient: if an element gets added to the list, just increment the count of that element in the multiset. If an element gets deleted, decrement the count of that element in the multiset.
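A sketch of that multiset example (hypothetical function names, using Python's `Counter` as the multiset): the from-scratch conversion is O(n), while the custom repair functions apply each list change as an O(1) delta to the counts.

```python
from collections import Counter

def to_multiset(xs):
    # The from-scratch version: rebuild all counts, O(n).
    return Counter(xs)

def repair_on_insert(multiset, x):
    # Custom repair: an insert into the list just bumps one count.
    multiset[x] += 1

def repair_on_delete(multiset, x):
    multiset[x] -= 1
    if multiset[x] == 0:
        del multiset[x]

xs = ["a", "b", "a"]
ms = to_multiset(xs)        # Counter({'a': 2, 'b': 1})
repair_on_insert(ms, "b")   # list conceptually becomes a, b, a, b
repair_on_delete(ms, "a")   # ...and then b, a, b
```

The repaired `ms` matches what `to_multiset` would produce on the updated list, which is the correctness condition any custom repair function has to satisfy.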
I feel like we are turning hackernews into lambda-the-ultimate :)
I wrote this really bad unpublished paper once [1] that described the abstracting-over-space problem as a dual and complement of the abstracting-over-time problem. It turns out that for simple scalar (non-list) signal (reactive) values, the best thing to do was to simply recompute. However, for non-scalar signals (lists and sets), life gets much more complicated: it makes no sense to rebuild an entire UI table whenever one row is added or removed, and so we want change notifications that tell us what elements have been added and removed. However, I've changed my mind since: it is actually not bad to redo an entire table just to add or remove a row, as long as you can reuse the old row objects for persisting elements. If my UI gets too big, I can create sub-components that memoize renderings unaffected by the change (basically partial aggregation).
Now how does that relate to the theList.toMultiSet example? Well, the implementation of toMultiSet could be reduced to partially aggregated pieces very easily (many computations can, actually), which could then be recombined in much the same way as rendering my UI! Yes, the solution that decrements/increments the count on a specific insertion/deletion is going to be "better", but a tree of partially aggregated memoizations works more often in general; it's easier to do with minimal effort on behalf of the programmer.
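One toy instance of such a tree of partially aggregated memoizations is a segment tree of partial sums (my choice of example, not the poster's): changing one element only recomputes the O(log n) partial sums on the path to the root, instead of re-aggregating everything, and the programmer only supplies the combine operation.

```python
class SumTree:
    """Binary tree of partial sums over a fixed-length list."""
    def __init__(self, items):
        self.n = len(items)
        self.tree = [0] * (2 * self.n)
        for i, x in enumerate(items):
            self.tree[self.n + i] = x           # leaves hold the items
        for i in range(self.n - 1, 0, -1):      # build partial aggregates
            self.tree[i] = self.tree[2 * i] + self.tree[2 * i + 1]

    def total(self):
        return self.tree[1]                     # root = full aggregate

    def set_item(self, i, x):
        i += self.n
        self.tree[i] = x
        while i > 1:                            # repair path to the root
            i //= 2
            self.tree[i] = self.tree[2 * i] + self.tree[2 * i + 1]

t = SumTree([1, 2, 3, 4])       # total is 10
t.set_item(2, 10)               # only the partial sums above leaf 2 change
```

Swapping `+` for table-rendering (or multiset union) gives the UI and toMultiSet versions of the same trick: memoize subtrees, recombine only along the changed path.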
I still need to understand their work better, but I approached my work from a direction opposite of algorithms (FRP signals, imperative constraints). I have a lot of catching up to do.
> but a tree of partially aggregated memoizations works more often in general; it's easier to do with minimal effort on behalf of the programmer.
Yes, that's exactly what you get if you do not implement a custom traceable data type (their terminology for a data type that supports repair), provided you write your code in such a way that the memoization is effective. Note that traceable data types do not necessarily need to be compound data structures; a traceable type can be an integer as well. For example, summing a list of integers to an integer: if one of the integers in that list gets changed, you do not need to recompute the entire sum, or even a logarithmically sized part of it; you can just subtract the original int and add the new int back in.
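That subtract-old-add-new repair in a few lines (illustrative only): the aggregate is a single integer repaired directly, with neither a full recompute nor a tree of partial sums.

```python
class RepairableSum:
    """Sum of a list, repaired in O(1) when one element changes."""
    def __init__(self, items):
        self.items = list(items)
        self.total = sum(self.items)        # computed once, up front

    def set_item(self, i, new):
        # Subtract the original int, add the new one back in: O(1).
        self.total += new - self.items[i]
        self.items[i] = new

s = RepairableSum([5, 7, 9])    # total is 21
s.set_item(1, 70)               # total repaired to 84 without re-summing
```

This only works because the combine operation (addition) has an inverse; for non-invertible aggregates (max, say) you fall back to the partially aggregated tree.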