> Sometimes you have to cut your losses ;-) Another reason why I dislike the term "live programming" is because it confuses two separate concepts: continuous feedback and rich feedback.
Again if we go back to Hancock's thesis, it's all there! It's not just about continuous feedback, it's about feedback with respect to a steady frame, it's about feedback that is relevant to your programming task, it's about feedback that is comprehensible. Hancock got it right the first time; there is no classical live programming (though there were other forms of liveness before). Actually, this is something I didn't get myself in my 2007 paper.
I don't think I need to abandon my word, especially since the standard bearers are Bret's demos; people want "that", not some sort of vaguely defined Smalltalk hot-swapping experience. The community I'm fighting over the word with is small and insignificant next to the Bret fans :).
As for the rest of your post, explicit state migration is a big deal for deployment hot swapping (Erlang?) but ultimately a nuance during a debugging session. A "best" effort with reset as a backup is more usable.
But maybe take a look at our UIST déjà Vu paper [1]: here the input is defined as a recorded video that undergoes Kinect processing, and we are primarily interested in the intermediate output frames, not just the last one. So the primary problem is one of visualization, while we ignore the hard problem of memoization and just replay the whole program. We even have the possibility to manage multiple input streams and switch between them.
Kinect programs are good examples of extremely stateful programs with well defined inputs. One of the next problems I'm trying to solve is how to memoize between the frames of such programs to make the feedback more lively.
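To make that concrete, here's a minimal sketch of what I mean by memoizing between frames; it's not the déjà Vu implementation, just an illustration with made-up names, where each stage's per-frame output is cached so that editing stage k only recomputes stage k and everything downstream:

```typescript
// Hypothetical sketch, not the déjà Vu implementation: replay a recorded
// stream frame by frame, caching each stage's per-frame output. A version
// bump on stage k invalidates stage k and everything downstream, but
// untouched earlier stages are served from the cache.
type Frame = { index: number; pixels: Uint8Array };
type Stage = { version: number; run: (input: any) => any };

class FrameMemoPipeline {
  private cache = new Map<string, any>();

  constructor(private stages: Stage[]) {}

  // Returns every intermediate output of every frame -- the thing we
  // actually want to visualize, not just the final result.
  process(frames: Frame[]): any[][] {
    return frames.map(frame => {
      let value: any = frame;
      let versions = "";
      return this.stages.map((stage, i) => {
        versions += stage.version + ".";
        // the key includes all upstream versions, so editing an earlier
        // stage invalidates this one too
        const key = `${i}:${frame.index}:${versions}`;
        if (!this.cache.has(key)) this.cache.set(key, stage.run(value));
        value = this.cache.get(key);
        return value;
      });
    });
  }
}
```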
> Again if we go back to Hancock's thesis, it's all there!
Yes, the problem is not with the definition of the term, but with the term "live programming" itself! It is too vague and can apply to too many concepts, and hence we're seeing people use and interpret it in many different ways. Nobody will go read a thesis to learn what a term means. But then again, "object oriented programming" is vague as well. The notion of "steady frame" does seem oddly domain specific. In the words of that thesis: water hosing your way towards the correct floating point cutoff value, or towards the value of a parameter in a formula that produces an aesthetically pleasing result, works great, but I'm not convinced that you can "water hose" your way to a correct sorting algorithm, for example. Perhaps I have misunderstood what he meant, though.
> A "best" effort with reset as a back up is more usable.
Yeah, I agree. I think the same primitives that can be used for building good explicit state migration tools, like saving the entire state and recording input sequences or recording and replaying higher level internal program events, can also be used for building good custom live programming experiences. So they are not two entirely disjoint problems.
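As a rough sketch of that idea (made-up names, not taken from any particular system): record the input events as they arrive, and on a hot swap fall back to reset-and-replay against the new code instead of writing an explicit migration function.

```typescript
// Rough sketch with made-up names: record inputs, and on a hot swap fall
// back to reset-and-replay through the new code.
type AppEvent = { kind: string; payload: unknown };

class ReplaySession<S> {
  private log: AppEvent[] = [];

  constructor(private init: () => S,
              private step: (state: S, e: AppEvent) => S,
              private state: S = init()) {}

  // normal execution: record the event, then apply it
  feed(e: AppEvent): S {
    this.log.push(e);
    this.state = this.step(this.state, e);
    return this.state;
  }

  // "best effort" migration: throw away the old state and replay the
  // recorded events through the new code; no hand-written migration needed
  hotSwap(newStep: (state: S, e: AppEvent) => S): S {
    this.step = newStep;
    this.state = this.log.reduce(newStep, this.init());
    return this.state;
  }
}
```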
> But maybe take a look at our UIST déjà Vu paper [1]
That's very interesting and looks like an area where live programming can work particularly well! A meta toolkit for building such domain specific live programming environments may be very useful if live programming is to take off in the mainstream. Of course LightTable is trying to do some of that, but while it started out in a quite exciting way they seem to be going back to being a traditional editor more and more (albeit extensible).
> One of the next problems I'm trying to solve is how to memoize between the frames of such programs to make the feedback more lively.
> A meta toolkit for building such domain specific live programming environments may be very useful if live programming is to take off in the mainstream.
That's exactly what we're trying to do with LT, see my "The Future is Specific" post [1].
> they seem to be going back to being a traditional editor more and more (albeit extensible)
This is a necessary detour as we build a foundation that actually works and allows us to really make the more interesting stuff. If we can't even deal with files, what good are we going to be at dealing with the much more complicated scenario of groups of portions of files? :)
> Yes, the problem is not with the definition of the term, but with the term "live programming" itself!
True. But I think the word has worked well until recently.
> The notion of "steady frame" does seem oddly domain specific.
Not really, but please wait for a better explanation until my next paper. One of Bret's examples in his IoP video is a correct sorting algorithm, actually, programmed with live feedback. I also mentioned a return to printf debugging on LtU before, and it's basically the direction I'm taking right now (the UI represented by the steady frame is probably not the GUI that is used by an end user).
Their work doesn't seem to scale yet (all examples seem to be small algorithmic functions) while I'm already writing complete programs, compilers even, with my own methods, which are based more on invalidate/recompute rather than computing exact repair functions. I'll be able to relate to this work better when they start dealing with bigger programs and state.
"Sam Aaron promotes the benefits of Live Programming using interactive editors, REPL sessions, real-time visuals and sound, live documentation and on-the-fly-compilation." :D
> One of Bret's examples in his IoP video is a correct sorting algorithm, actually, programmed with live feedback. I also mentioned a return to printf debugging on LtU before, and it's basically the direction I'm taking right now
Yeah, this interpretation of 'steady frame' is fully general, I think: the ability to compare feedback of version n with feedback of version n+1 without getting lost. My interpretation was more specific because of the water hose vs. bow and arrow analogy: continuously twiddling knobs until you get the result you want vs. discrete aim-and-shoot. For example, picking the color of a UI widget with a continuous slider vs. entering the rgb value and reloading. Since a sorting algorithm is not a continuous quantity, aim-and-shoot is inevitable, though you can still make it a bit more "continuous" by making the aim-and-shoot cycle more rapid.
> which are based more on invalidate/recompute rather than computing exact repair functions
You can do this in their framework; you can specify at which granularity you want to have the 'repair functions' and at which granularity you just want to recompute. For example if you have a List<Changeable<T>> then each item in the list can be repaired independently, if you have Changeable<List<T>> the whole list will be recomputed. Although you probably want to automatically find the right granularity rather than force the user to specify it?
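To illustrate the granularity point with a toy `Changeable` stand-in (this is not their actual API, just a sketch): with `List<Changeable<T>>` a computation that read only one element is invalidated only when that element changes, whereas with `Changeable<List<T>>` any change invalidates every reader of the list.

```typescript
// Toy stand-in for a self-adjusting "changeable" cell; not their actual API.
class Changeable<T> {
  private readers: Array<() => void> = [];
  constructor(private value: T) {}
  get(onInvalidate: () => void): T {
    this.readers.push(onInvalidate);   // remember who depends on this cell
    return this.value;
  }
  set(v: T) {
    this.value = v;
    const rs = this.readers;
    this.readers = [];
    rs.forEach(r => r());              // invalidate every reader
  }
}

// Coarse granularity: one cell holds the whole list, so changing any
// element invalidates every computation that read the list.
const coarse = new Changeable<number[]>([10, 20, 30]);

// Fine granularity: each element is its own cell, so a computation that
// only read element 1 is only invalidated when element 1 changes.
const fine: Changeable<number>[] = [10, 20, 30].map(x => new Changeable(x));
```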
Yeah, I saw it too. I didn't see the talk, but I expect it to be more of the same promotion of live coding as somehow actually being live programming (programming is like playing music! Yeah...).
> Since a sorting algorithm is not a continuous quantity, aim-and-shoot is inevitable, though you can still make it a bit more "continuous" by making the aim-and-shoot cycle more rapid.
A sorting algorithm can be fudged as a continuous function. But here "continuous" means "continuous feedback", not "data with continuous values". The point is not that the code can be manipulated via a knob, but that as I edit the code (usually with discrete keystrokes and edits), I can observe the results of those edits continuously.
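A toy way to picture that (purely illustrative): run the sort being edited against a fixed sample input and capture every intermediate state; the trace is the steady frame, and after each edit you re-run and compare the new trace against the old one.

```typescript
// Purely illustrative: the trace of intermediate states is the steady frame;
// re-run after every edit and diff the new trace against the previous one.
function tracedBubbleSort(input: number[]): number[][] {
  const xs = [...input];
  const trace: number[][] = [[...xs]];
  for (let i = 0; i < xs.length; i++) {
    for (let j = 0; j < xs.length - 1 - i; j++) {
      if (xs[j] > xs[j + 1]) {
        [xs[j], xs[j + 1]] = [xs[j + 1], xs[j]];
        trace.push([...xs]);   // printf-style feedback, one line per swap
      }
    }
  }
  return trace;
}

tracedBubbleSort([3, 1, 2]).forEach(step => console.log(step.join(" ")));
```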
> You can do this in their framework; you can specify at which granularity you want to have the 'repair functions' and at which granularity you just want to recompute.
I'll have to look at this work more closely; the fact that we need custom repair functions at all bothers me (repair should just be defined as undo-replay). The granularity of memoization is an issue that has to be under programmer control, I think.
You don't need custom repair functions; the default is undo-replay, but in some cases it helps performance to have custom repair functions. For example, suppose you have a changeable list, and a changeable multiset (a set that also keeps a count for each element). Now you do theList.toMultiSet(). If the list changes, then the multiset has to change as well. If you applied their framework all the way down, this might be reasonably efficient. But with custom repair functions it can be more efficient: if an element gets added to the list, just increment the count of that element in the multiset; if an element gets deleted, decrement the count of that element in the multiset.
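A sketch of that custom repair, with made-up names (the list's delta is applied to the multiset instead of forcing a rebuild):

```typescript
// Sketch of the custom repair described above (names are made up): apply
// the list's delta to the multiset instead of rebuilding it from scratch.
type ListChange<T> =
  | { kind: "insert"; item: T }
  | { kind: "delete"; item: T };

class MultiSet<T> {
  private counts = new Map<T, number>();

  // full recompute -- what plain undo-replay would do
  static fromList<U>(xs: U[]): MultiSet<U> {
    const m = new MultiSet<U>();
    xs.forEach(x => m.apply({ kind: "insert", item: x }));
    return m;
  }

  // O(1) repair per list change
  apply(change: ListChange<T>): void {
    const delta = change.kind === "insert" ? 1 : -1;
    const next = (this.counts.get(change.item) ?? 0) + delta;
    if (next === 0) this.counts.delete(change.item);
    else this.counts.set(change.item, next);
  }

  count(x: T): number {
    return this.counts.get(x) ?? 0;
  }
}
```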
I feel like we are turning hackernews into lambda-the-ultimate :)
I wrote this really bad unpublished paper once [1] that described the abstracting-over-space problem as a dual and complement of the abstracting-over-time problem. It turns out, for simple scalar (non-list) signal (reactive) values, the best thing to do was to simply recompute. However, for non-scalar signals (lists and sets), life gets much more complicated: it makes no sense to rebuild an entire UI table whenever one row is added or removed, and so we want change notifications that tell us what elements have been added and removed. However, I've changed my mind since: it is actually not bad to redo an entire table just to add or remove a row, as long as you can reuse the old row objects for persisting elements. If my UI gets too big, I can create sub-components that memoize renderings unaffected by the change (basically partial aggregation).
Now how does that relate to the theList.toMultiSet example? Well, the implementation of toMultiSet could be reduced to partially aggregated pieces very easily (many computations can, actually), which could then be recombined in much the same way as rendering my UI! Yes, the solution that decrements/increments the count on a specific insertion/deletion is going to be "better", but a tree of partially aggregated memoizations works more often in general; it's easier to do with minimal effort on behalf of the programmer.
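Here's a rough sketch of what I mean by a tree of partially aggregated memoizations (toy code, assuming a commutative combine): internal nodes cache the aggregate of their subtree, so changing one leaf only recomputes the O(log n) aggregates on the path to the root.

```typescript
// Toy sketch: a segment tree of memoized partial aggregates (assumes the
// combine is commutative, which holds for counts and sums).
interface Monoid<A> { empty: A; combine(x: A, y: A): A; }

class AggTree<T, A> {
  private agg: A[];   // agg[i] caches the aggregate of one subtree
  constructor(private leaves: T[], private lift: (t: T) => A, private m: Monoid<A>) {
    const n = leaves.length;
    this.agg = new Array<A>(2 * n).fill(m.empty);
    for (let i = 0; i < n; i++) this.agg[n + i] = lift(leaves[i]);
    for (let i = n - 1; i >= 1; i--)
      this.agg[i] = m.combine(this.agg[2 * i], this.agg[2 * i + 1]);
  }
  // repair: recompute only the path from the changed leaf to the root
  update(i: number, t: T): void {
    let j = this.leaves.length + i;
    this.leaves[i] = t;
    this.agg[j] = this.lift(t);
    for (j >>= 1; j >= 1; j >>= 1)
      this.agg[j] = this.m.combine(this.agg[2 * j], this.agg[2 * j + 1]);
  }
  total(): A { return this.agg[1]; }
}

// the toMultiSet aggregate: lift each element to a singleton count map and
// combine by adding counts; the memoized tree then repairs in O(log n)
const counts: Monoid<Map<string, number>> = {
  empty: new Map(),
  combine(x, y) {
    const out = new Map(x);
    y.forEach((c, k) => out.set(k, (out.get(k) ?? 0) + c));
    return out;
  },
};
const tree = new AggTree(["a", "b", "a"], x => new Map([[x, 1]]), counts);
tree.update(1, "a");          // only the path to the root is recombined
console.log(tree.total());    // Map { "a" => 3 }
```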
I still need to understand their work better, but I approached my work from a direction opposite of algorithms (FRP signals, imperative constraints). I have a lot of catching up to do.
> but a tree of partially aggregated memoizations works more often in general; it's easier to do with minimal effort on behalf of the programmer.
Yes, that's exactly what you get if you do not implement a custom traceable data type (their terminology for a data type that supports repair), provided you write your code in such a way that the memoization is effective. Note that traceable data types do not necessarily need to be compound data structures; a traceable value can be e.g. an integer as well. E.g. summing a list of integers to an integer: if one of the integers in that list gets changed, you do not need to recompute the entire sum, or even a logarithmically sized part of it; you can just subtract the original int and add the new int back in.
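For the sum example, a minimal sketch (made-up names) of that O(1) repair:

```typescript
// Minimal sketch (made-up names): repair the sum in O(1) by subtracting the
// old element and adding the new one, instead of re-summing the whole list.
class TraceableSum {
  private total: number;
  constructor(private xs: number[]) {
    this.total = xs.reduce((a, b) => a + b, 0);
  }
  update(i: number, v: number): number {
    this.total += v - this.xs[i];   // undo the old contribution, apply the new
    this.xs[i] = v;
    return this.total;
  }
  value(): number { return this.total; }
}

const s = new TraceableSum([1, 2, 3, 4]);
s.update(1, 10);          // 1 + 10 + 3 + 4
console.log(s.value());   // 18
```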
[1] http://research.microsoft.com/apps/pubs/default.aspx?id=1793...