Hacker News
Unison: a next-generation programming platform (unisonweb.org)
308 points by adamwk on May 8, 2015 | 128 comments


Wow, author here, was not expecting this to end up here! This project is very much in development, it's not close to a finished product and I hope I didn't give that impression. I created the site so there'd be a space other than my blog to share updates about progress and so on. If everything I wrote sounds like nonsense, check back in 6 months. I am hoping things will be much further along, and more concrete by then. It will get there.

I definitely appreciate the feedback from folks about how it is unclear what the project even is from the descriptions. I am still trying to figure out the best way to explain it succinctly (so far haven't been too successful). Having a better demo to show would definitely go a long way.

This might help a little bit - here's a video and some explanation of the expression editor from my blog: http://pchiusano.github.io/2015-03-17/unison-update5.html The platform is going to be more than just the editor, but that might provide a feel for what the editing experience could be like. Obviously there are many details and plenty of polish still to be worked out.


Ok, so I spent about an hour casually reading the posts, watching the demo videos and glancing at the code. I agree with your premise and I like a lot of the implementation so far. It's great to see more innovation and research in this area alongside Light Table, Eve (which seems to be going in a similar direction with spreadsheets), NoFlo and others.

I'm curious about the goals of the project. If the goal is to replace an existing language and editor pairing then it's going to be an uphill battle (see previously mentioned related projects). If it's to be supplemental with existing development tools then what use cases are you targeting?

Adding this as an optional layer to an existing editor might help work out some of the ergonomics. Performance issues aside, building on top of Atom might be a good choice since it's mostly web based like the current UI in the demos. Similarly, a custom kernel or cell type in ipython might get the ideas into daily use.


Where are the demo videos? Couldn't find those and I'd be very interested to see them.


Here's the most recent video that I could find: http://pchiusano.github.io/2015-03-17/unison-update5.html

A few of the other posts have videos too: http://pchiusano.github.io/unison/


I think the time is ripe for this kind of software: I'm also working on something very similar. I've put a lot more work into the editing/layout features than the portion for generating the AST—but my scheme for doing it is nearly identical to what you have in your video. A demo video of my editor is here: http://symbolflux.com/projects/tiledtext

When I was working on it before, my approach was to use grammatical knowledge of traditional, text-based languages to generate a selection of grammatically valid constructs for your current 'cursor' position; only in the last few weeks did I hit on the idea to do it 'syntax-free' (in my terminology).

My thinking on why 'syntax-free' is The Way to go: if you look at what someone's trying to do when writing a program, it's really: (1) select language construct (2) select parameters as a 'configuration' of selected construct. Text is one way to specify those things, but a far more effective system addresses the fact that that's all you're doing directly, and allows views to be attached after the fact (even if something that's primarily text ends up being the optimal view!).
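That two-step description (select a construct, then configure its parameters) can be sketched as a tiny editor model. All names here are hypothetical, invented just to illustrate the idea, not taken from any real editor:

```python
# Hypothetical sketch of "syntax-free" editing: the editor offers a menu of
# language constructs, and the user fills in each construct's parameter slots.
from dataclasses import dataclass

@dataclass
class Hole:
    """An unfilled parameter slot in the tree."""
    label: str

@dataclass
class Node:
    construct: str   # e.g. "if", "apply", "lambda"
    children: list   # parameters: Nodes, Holes, or literals

def select_construct(name, labels):
    """Step 1: pick a construct; its parameters start as holes."""
    return Node(name, [Hole(l) for l in labels])

def fill(node, index, value):
    """Step 2: configure the construct by filling one slot."""
    node.children[index] = value
    return node

# Build `if x then 1 else 0` without typing any syntax:
expr = select_construct("if", ["cond", "then", "else"])
fill(expr, 0, Node("var", ["x"]))
fill(expr, 1, Node("lit", [1]))
fill(expr, 2, Node("lit", [0]))
```

Any number of textual (or non-textual) views could then be rendered from the tree after the fact.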

In case it's helpful to you for communicating the idea to people, I often described the benefit of this kind of system as automatically allowing someone who only knows how to read a language to also be able to write in it—no more remembering the details of how to express construct X in language Y.


Awesome project. Ignore the haters. I think this (https://pchiusano.github.io/2015-04-23/unison-update7.html#r...) was the link that I found most helpful to get an idea of what you're doing, of the random set of pages that I flipped through.


The things you're describing are EERILY similar to some stuff I've been conceptualizing and thinking about recently, as inspired by the VPRI people's work/OMeta/Shen. I'd just remark that...

"The program is a UI, and UI interaction is programming. Unison panels take the idea of spreadsheets, which blur the distinction between UI interaction and programming, to its logical conclusion."

...this blurring between programming and interaction (which I've also thought of) probably offers a potentially viable path to much more powerful voice control of devices, since programming APIs serving as user interfaces can now also (besides graphical UI) implicitly define strict grammars for structured voice input. That's the one thing I thought of that I didn't see mentioned there.


One term I didn't see on the page, but you should look into examples of if you already haven't, is "structure editor". That's the notion of a source code editor which doesn't allow free-form text editing, but creation of well-formed language semantics.


Also, "tree editor".


As someone with a "similar" platform, I too am rather surprised this ended up on the HN frontpage ;(

Few questions:

* how is your editor different from MPS (except being browser based)?

* how does your project compare to an implementation on top of language workbenches?

* do you think your assumption about time spent on a project is mostly Haskell related or is that for some specific types of projects?

* are you thinking about implementing some framework of your own or do you have some other in mind?

* do you plan on integrating with existing libraries or write your own implementation?

And comments:

* for me a good IDE feels like a semantic editor. I'm inclined to believe that a text-based editor can have all the good features of a semantic editor and avoid most of the bad ones.

* while writing simple structures once and reusing them vertically in a project is useful, it's doubtful that it's worthwhile. It becomes worthwhile only if you have a project in multiple languages/technologies. But mostly if you can do refactoring automatically (even database schema migrations).

* try to explain your project better; this should help you with narrowing its scope. Most people will try to fit it into some category they are familiar with, so emphasize the distinctions.

Good luck! ;)


Here's the link to MPS for those who don't already know what it is: https://www.jetbrains.com/mps/


Can I summarize this with: an immutable Lisp with 2-way bindings to m-expressions? Sorry, I don't mean to trivialize or anything, but my friends and I have been thinking of something eerily along these lines. I think it's one of those ideas whose time has come. After reading your descriptions I think I have a better idea in my mind of what I was grasping at earlier. We were pretty inspired by Elm and you seem to be too.


Hi There. I'm just here to get in line for a "I've been working on something similar..." comment. Nice work!


Same here. Well, s/working on/pondering and failing at/ for me :P

My most recent attempt: https://github.com/pshc/archipelago


This is extremely relevant to my interests and something I was pondering beginning work on. It makes me very glad (if not feeling a bit unoriginal) to hear that so many other people had plans to do similar. :)


> Perhaps 70% of developer time is spent dealing with parsing, serialization, and persistence. Values are encoded to and from JSON, to and from various binary formats, and to and from various persistent data stores… over and over again.

This is not my experience at all. I've spent maybe an hour or two in the last few months on parsing, serialization, and persistence. For the work I do, these are all solved problems.

> Another 25% is spent on explicit networking. We don’t merely specify that a value must be sent from one node to another, we also specify how in exhaustive detail.

Honestly I'm not sure what the author means by this. If I have a value others might want to know about, I just trigger an event. There is no "exhaustive detail" going on. (Plus events are easy to grep.)

>Somewhere in between all this plumbing code is a tiny amount of interesting, pure computation, which takes up the remaining 5% of developer time. And there’s very little reuse of that 5% across applications, because every app is wrapped in a different 95% of cruft and the useful logic is often difficult to separate!

Boilerplate code used to be an issue, but the languages I still use have ways around it (macros, first-class functions, extending, etc.). We build modules and libraries and controls, all reusable code that lets me spend most of my time being bored in meetings.


>> Perhaps 70% of developer time is spent dealing with parsing, serialization, and persistence. Values are encoded to and from JSON, to and from various binary formats, and to and from various persistent data stores… over and over again.

> This is not my experience at all.

Yeah, unless you're tinkering around in a side project just to learn something, don't build your own JSON parser or writer.

I've spent WAY more time thinking about how to structure my data for serialization than how to serialize it. For the vast majority of use cases, serialization is such a solved problem that if there isn't an obvious way to proceed, you're probably doing it wrong.

>> Another 25% is spent on explicit networking. We don’t merely specify that a value must be sent from one node to another, we also specify how in exhaustive detail.

> Honestly I'm not sure what the author means by this.

Doing low-level socket programming? Maybe? But in the grand scheme of things, that's probably a fairly specialized use case.


I suspect that "how to structure my data for serialization" is what the author means by the 70% of time spent on parsing, serialization, and persistence. I hadn't heard of Unison before, but I recognize the author's name from Lambda the Ultimate, and I suspect that what he has in mind is that any value within the Unison language can appear on any Unison node, transparently. Instead of picking out exactly which fields you need and then creating new JSONObjects or protobufs for them, just send the whole variable over.

I also suspect (being a language design geek, and also having worked with some very large distributed systems) that the reason why this is seductive is also why it's unworkable. I think I probably do spend close to 70% of my time dealing with networking and data formats (and yes, I use off-the-shelf serialization formats and networking protocols), but that's because a watch is very different from a phone which is very different from a persistent messaging server which is different from a webpage, and Bluetooth is very different from cell networks which are very different from 10G-Ethernet in a DC. Try to dump your server data structures directly to your customer's cell phone and you're about to have a lot of performance and security problems.


Maybe the author started this project some years ago, before parsing and network libraries were common? Because if you don't have libraries, he's right.

Starting from scratch can yield radically better solutions than how tech/market happened to evolve.


It would have to be extremely old for that to be true. All the problems he mentioned have had some form of solution for decades. Some use cases needed significant changes in those solutions (ex: the need for NoSQL DBs), but most development has stayed in a zone where the available patterns existed for the things he mentioned.


Perhaps, but the "solutions", when put together in a single project, probably resemble the "30 quicksorts in a single binary" syndrome. Lots of code doing conceptually the same things.


hmm, JSON is only a decade and a half old, so not "decades" for that particular one. And it's only really taken off within the last few years.

You're right about it being conceptually solved for many decades, but we developers like to reinvent everything every decade or so.

For example, back when XML was gaining in popularity, around 2000, there were new parsers/serializing libraries launched all over the place - even standards like SAX and DOM. Many people rolled their own; many had to.


It's not like JSON is the first serialization format ever. (I still have XDR & CDR nightmares, and that was the early 90's)

So, yes, decades. Not necessarily solved well, but yes, people shipped data over the network before JS ;)


If you're doing low-level anything, it's usually because you're interested in the "exhaustive detail". This is quite confusing.

> Also as a result, Unison has a simple story for serialization and sharing of arbitrary terms, including functions. Two Unison nodes may freely exchange data and functions—when sending a value, each states the set of hashes that value depends on, and the receiving node requests transmission of any hashes it doesn’t already know about. Using nameless, content-based hashes for references sidesteps complexities that arise due to the possibility that sender and receiver may each have different notions of what a particular symbol means (because they have different versions of some libraries, say).

Yeah, I can see how this is going to solve the problem of persistence and networking once and for all. Or not. As for sidestepping the problem of having different versions of the same library on different nodes and serializing data, that's going to work fine until the first rename, or when node A with version 1 of the library sends a data structure with half the fields understood by version 2 of the library on node B...


He addresses this issue - see the part about using hashes for everything so neither names nor lib versions matter, only the contents do, identified by hashes.
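For anyone trying to picture that scheme, here is a minimal sketch of the hash-based exchange. This is my own toy encoding for illustration, not Unison's actual wire format:

```python
# Content-addressed exchange: values reference dependencies by hash, and a
# receiving node only requests the hashes it does not already have.
import hashlib
import json

def h(term):
    """Name a term by the hash of its content, not by a symbol."""
    return hashlib.sha256(json.dumps(term, sort_keys=True).encode()).hexdigest()

class UnisonNode:
    def __init__(self):
        self.store = {}  # hash -> term

    def missing(self, hashes):
        """Receiver: which of these dependencies do I lack?"""
        return [x for x in hashes if x not in self.store]

    def receive(self, terms):
        for t in terms:
            self.store[h(t)] = t

# Sender A ships a value along with the hashes it depends on.
a, b = UnisonNode(), UnisonNode()
dep = {"op": "square", "body": "x * x"}
val = {"op": "apply", "fn": h(dep), "arg": 3}
a.receive([dep, val])

needed = b.missing([h(dep), h(val)])       # B asks only for what it lacks
b.receive([a.store[x] for x in needed])    # A transmits just those terms
```

Because names play no part in identifying a term, two nodes with different library versions never disagree about what a given hash refers to; they can only lack it.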


I didn't read it as getting your data serialized for storage, I read it as serializing your data between different libraries of a program. You know, putting the arguments to a function the right way round, changing the structure so that library A can talk to library B. I do spend a fair amount of time doing that, and it would be solved by a more semantic approach.


That's not serialization.


I agree on parsing/serialization/persistence; but when interacting with a new JSON format, although the parsing is free (via a library), you still have to interpret/bind/integrate it into your own app's data structures. Often this is implicit in looking up values in the JSON, then calling something with them - but there's still that intermediate layer. If you're often dealing with new JSON formats (and I think most developers aren't), this integration/gluing would take up a large proportion of development time.


Indeed. I think this is the key takeaway from that whole passage:

> These numbers are made up, of course


I have no idea what world the author comes from, but it doesn't remind me of my day to day development efforts using Python.


Followed by:

> These numbers are made up, of course...


Here are some more blog posts about the project: http://pchiusano.github.io/unison/

As far as I can tell, the goal is to put interaction with a UI on the same level as editing a program. To this end, Unison expressions, values and UI elements all live in a single syntax tree which can be edited in a structured way.

A demo would go a long way toward making this all clearer. Even some text describing what it would be like to use the finished product would help.


Interesting to see he's also the author of the book Functional Programming in Scala.


Doesn't this go against the MVC grain -- UI / model separation of concerns?


Well, it just means you only have one V and one C for all the Ms in the universe.


On the surface it sounds interesting, but I don't really buy the idea that programming is terribly limited by the quality of text editors. Even "basic" editors like vim and emacs can almost instantly catch most syntax problems, and IDEs like Eclipse and Visual Studio are great about complaining if you make a mistake.

In my experience, these days relatively few problems come from language syntax errors ... most problems come from mistakes in logic, poor systems integration, or incorrect validation of input data. Not sure how Unison will solve those challenges, but to be fair maybe that is out of scope.

However, it is hard to slog through the marketing, and in the end it's just not clear from the writeup what sort of programs one would write in Unison, so it doesn't really inspire me to spend any more time trying to learn about it.


One of the mantras of functional programming is get your data structure right, and everything else falls into place. As an experienced functional programmer, I see this in practice every time I program--even a slightly "wrong" type pollutes the code dramatically.

The idea then with structure/tree/semantic editors is two-fold.

First, text is obviously the wrong data structure for programs, so even if people manage to provide fairly good support with the "modern IDE", the effort it takes is staggering. And still details like files, character encodings, god damn tabs vs. spaces, and terminals leak through. One can't help but wonder how much farther they would get were their efforts directed more efficiently. Look at https://github.com/yinwang0/ydiff for example; it blows text diff out of the water.
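The tree-vs-text point is easy to demonstrate even with stock tooling. Here Python's own ast module treats two differently formatted texts as the same program, where a line-based diff would report a change:

```python
# Comparing programs as trees rather than text ignores formatting noise.
import ast

src_a = "x=1+2"
src_b = "x = 1 + 2   # same program, different text"

# The texts differ, but their syntax trees are identical: whitespace and
# comments never make it into the parsed tree.
tree_a = ast.dump(ast.parse(src_a))
tree_b = ast.dump(ast.parse(src_b))
assert src_a != src_b
assert tree_a == tree_b
```

A tree diff like ydiff generalizes this: it reports only structural changes, and can recognize moved or renamed subtrees that text diff sees as unrelated delete/insert pairs.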

Second, the idea is that maybe text is the "wrong data structure" for programmers too, not just tool-smiths. I admit there are a bunch more psychological/neurological aspects that make the answer less obvious. But I see no reason not to keep an open mind until a polished tree editor exists for empirical testing. Even if text is in some ways better, the improved functionality a tree editor offers could offset that dramatically.


Functional programming kind of screws itself when it comes to "implementing" good modern IDEs; the aversion to state makes it difficult to implement the good tree-incremental, error-resistant parsing and type checking that a language-aware editor needs.

I've developed plenty of advanced IDE infrastructure; e.g. see

http://research.microsoft.com/en-us/people/smcdirm/managedti... and http://research.microsoft.com/en-us/projects/liveprogramming...

The trick was abandoning FRP-style declarative state abstractions (which was my previous research topic) and moving onto something that could manage state in more flexible ways (as we say, by managing times and side effects instead of avoiding them). And once you've nailed the state problem, incremental parsing, type checking, and rich editing are actually easy problems.


Yeah, incremental parsing is quite anti-functional--but tree editors don't need to parse.

Type checking I am less convinced about. With syntax-directed methods one should be able to hash-n-cache each subexpression and get incrementality for free. In the case of syntax-directed + passing hints down the tree, just keep track of the hints too. [...let's not talk about Damas-Milner. :)]
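A rough sketch of that hash-n-cache idea, with a toy language and names invented purely for illustration: memoize the type of every subexpression keyed by a content hash, so re-checking after an edit only recomputes the spine that actually changed:

```python
# Incremental type checking by content hashing: unchanged subtrees hit the
# cache, so only new nodes cost any work. Toy language: int literals, "+",
# and "pair"; expressions are nested tuples.
import hashlib

cache = {}   # content hash -> type
calls = 0    # counts cache misses, i.e. real type-checking work

def key(e):
    return hashlib.sha256(repr(e).encode()).hexdigest()

def typeof(e):
    global calls
    k = key(e)
    if k in cache:
        return cache[k]
    calls += 1
    if isinstance(e, int):
        t = "Int"
    elif e[0] == "+":
        assert typeof(e[1]) == typeof(e[2]) == "Int"
        t = "Int"
    elif e[0] == "pair":
        t = ("Pair", typeof(e[1]), typeof(e[2]))
    cache[k] = t
    return t

big = ("+", 1, ("+", 2, 3))
typeof(big)                    # fills the cache for big and its subtrees
edited = ("pair", big, 4)      # an "edit" that reuses `big` unchanged
n = calls
typeof(edited)                 # only the pair node and the literal 4 are new
```

After the edit, `calls` grows by exactly 2: the reused subtree costs nothing. Hint-passing type systems fit the same mold by including the hints in the cache key.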

Then, which I think Unison goes for, there is the approach where one never has untyped syntax to begin with. Not sure how annoying this is with normal programming, but should be great for theorem proving, which probably where tree editors will shine the most anyways.


Coupled with type inference, which you will need whether you use a structured editor or not, incremental type checking is indeed very non-functional.


We agree incremental use of global type inference algorithms is non-functional.

But I am also saying you don't need to do that. You certainly don't need type inference in the case where the ASTs are typed by construction. And the syntax-directed algorithms can also infer somewhat.

You can also alternatively cache the result of global inference, which sucks for refactoring but preserves purity.


You will need type inference if you want any sort of non-local communication...e.g. variables that hold different values or functions. But ya, if your language is just pure trees with no names or symbol tables, you don't need it; not very useful, however.


Can you give some more detail about what you replaced FRP with? Sounds interesting.


I replaced it with glitch. It was actually something I started developing for the scala IDE a long time ago that had to integrate easily with scalac, so it had to handle the limited set of effects performed by it (they no longer use this in the scala plugin, but initial results were promising, and being very incremental dealt with scalac's performance problems).

I've refined the technique over the last 7 years, you can read about it in a conference paper:

http://research.microsoft.com/pubs/211297/onward14.pdf

You can think of Glitch as being like React with dependency tracing (no world diffing) and support for state (effects are logged, must be commutative to support replay, and are rolled back when no longer executed by a replay).
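As a very loose illustration (this is my own toy, not Glitch's actual API): effects are logged along with an undo action, and on replay any effect that is no longer re-executed gets rolled back, which only works cleanly because the effects commute:

```python
# Sketch of replay-with-rollback: re-run the computation after a state
# change; effects logged last time but not re-executed are undone.
class Replay:
    def __init__(self):
        self.log = {}  # effect key -> undo action

    def run(self, computation):
        old = self.log
        self.log = {}
        computation(self)
        # roll back effects the replay did not re-execute
        for k, undo in old.items():
            if k not in self.log:
                undo()
        return self

    def effect(self, key, do, undo):
        # commutative, idempotent effects make replay order-insensitive
        if key not in self.log:
            do()
        self.log[key] = undo

out = set()
state = {"items": ["a", "b"]}

def render(r):
    for x in state["items"]:
        r.effect(("add", x), lambda x=x: out.add(x), lambda x=x: out.discard(x))

r = Replay().run(render)       # out is now {"a", "b"}
state["items"] = ["b", "c"]
r.run(render)                  # "a" is rolled back, "c" is added
```

No diffing of old against new output ever happens; convergence falls out of the log comparison.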


It is very limited (I am a vim guy, so this has nothing to do with vim). We are simply reinventing the wheel every time, and that is the biggest issue.


The editor (or whatever it's called) promises to allow only well-typed programs, not just syntactically correct programs.


Eclipse, Visual Studio, and all the Haskell IDEs catch type errors on the fly and help you fix them.


Yes, but you have to finish typing the syntax first, and the IDE that is constantly parsing your code file is always going to report errors as the code goes through a broken state while you're typing. That's one thing Unison is trying to fix.


> That's one thing Unison is trying to fix.

Is that actually a problem? If you're never allowed to input incomplete/broken code, then the way you program is very limited. Often I like to start writing a function, get to the point where I realize I need another function, write that one, and come back to the first function.

In an editor where only working code can be input, you can't do this style of programming, which almost rules out exploratory programming. You have to more or less know what you're doing from the start.


This sounds implausible. How can you run your intermediate program if it's missing a function? I don't see that accepting only well-typed programs changes anything, and furthermore it applies to all typed languages not just ones that enforce well-typedness by construction.


You can't run it. The point is that if I'm trying to figure out how to implement something, I can start writing down my ideas and adjust as I go. If I have to write valid code the whole time, my ability to do this is very limited.

If you already know what code you want to write, this makes sense. But how often does that happen? I suppose you could work out your ideas on paper first, but that eliminates much of the value modern editors provide.


> If you already know what code you want to write, this makes sense. But how often does that happen? I suppose you could work out your ideas on paper first, but that eliminates much of the value modern editors provide.

What would be ironic is if this semantic editing would force us to sketch our functions/modules/program with pen and paper before we are able to actually input them into this semantic editor. An editor that tries to go beyond the "archaic" textual interface, and ends up forcing you to use the even more "archaic" pen and paper.


The difference is that this editor will supposedly not even allow the code to be in a state of type errors. You literally can not construct a program that is not well-typed. Don't ask me how that editing experience is supposed to work, or if it's a good idea.


Actually, that sounds terrible. I am already sometimes annoyed when I try to first use a variable and then go back and decide where I should declare it (Visual Studio doesn't like that); literally not being able to go through a broken state would be annoying.


If this video < http://pchiusano.github.io/2015-03-17/unison-update5.html > is an example of this, then yes, it does look annoying and inefficient. Now I'm not sure if I understand exactly what he's doing here, but it looks like something along those lines: rather than inputting what he wants directly via the keyboard, he has to choose a pattern (like variable-operator-variable) from a drop-down list, and then tab between the fields to fill in the pattern.

A poor analogy might be the difference between speaking words verbally to form a sentence vs. shuffling through a stack of index cards to find the right words to arrange on a table to form a sentence. The latter would be useful if you were just learning a language, but once you know how to speak it, that would be an enormous waste of time.


Agreed; editor "slop" is important for UX. A structure editor needs to model a text editor, with all its indeterminacy, to be usable.


I'm annoyed just by simple error messages (like in Eclipse) while I'm editing in a "broken" state. I know that this class isn't valid Java yet, Eclipse; shut up! :)


Unison is also already the name of a file sync program written in OCaml, a team collaboration service, a proprietary newsreader and apparently a security management (access control/intrusion detection/firewall/etc) platform.

The file sync is probably the most popular, and the one that first came to mind.

I realize that the exclusive namespace for software titles is shrinking, but I think that when you already have this much ambiguity, it's not a wise choice to pick generic dictionary word names. Putting some thought into naming things does pay off.


It's also a category of inventions, "Unison devices", which refer to ways to keep two things in sync. This was a huge problem from about 1850 to 1980 or so. Edison's stock ticker was one of the first big successes, but it wasn't really nailed until the invention of the phase-locked loop. (Anyone remember "horizontal hold" knobs?)


Had this same thought, there's a lot of things named unison, including some really popular ones.


It was also the name of a popular general midi soundfont :)


And it's also the name of the second-biggest trade union in the UK, with 1.3 million members.


I came here to complain about calling things 'next-generation languages' when it's obvious that a next-gen language would have to have a purely functional AST.

Turns out Unison has that; so you've got my interest!

I agree with the other comments though that these (amazing) salient details could be communicated a bit more succinctly ;)

P.S. I have a presentation up at https://speakerdeck.com/mtrimpe/graphel-the-meaning-of-an-im... which lists some of the real-world benefits of such a language... feel free to use it for inspiration.


Alright, I made an honest attempt to read through all of it, but I still have a fuzzy understanding of what you are trying to achieve with this project. I understand our current development tools might not be the best, but if I understand this right, you are essentially making a framework into a language, with an editor for said language? I don't see that moving away from the model we have now, simply moving things lower in the stack. Maybe I'm just not understanding.


How does this differ from other next-generation web platforms like Opa?

http://opalang.org/

Also, why does this need to re-use the name of a next-generation file synchronization tool?

http://www.cis.upenn.edu/~bcpierce/unison/


How does it improve on rsync?


Unison is bidirectional.


While the ideas here are surely thought-provoking, the most important factor seems to be the user experience of a semantic editor. How will a user perceive this constrained environment?

From what I can tell, users love freedom and computers hate them. Why else do you think people still stick to text editors? A plain text editor is probably the most unconstrained environment there is. So, what is the best way to find some sanity in between?

If you ask me, I would say that the main way to transform or change programming would be to remove as much constraint as we possibly can from the user. Let me crap all over the place, and you as the environment make sense out of it. That is why we are building Dhi (http://www.dhi.io), an AI assistant to help you build user interfaces. We are building it in such a way that the user has complete freedom to say whatever he wants to say, and we (perhaps sometimes even with the help of the user!) plan to make sense of it. THAT is the dream.


A few mockups or screenshots would have helped me immensely to understand this "next gen text-based-programming killer", that is ironically described exclusively in plaintext. In my head, ideas range from visual programming language to programmable spreadsheet, with no clear picture of even how the language looks.

I'm so confused...


You can see them at the author's blog: https://pchiusano.github.io

He's posted screenshots and videos about this project. He has allocated 3 months to this project to see where it goes.


This is truly next-gen. It's going to eliminate so much of the bullshit work in programming. Thanks so much for making it open source.

Plaintext must be killed. Long live real structure.


This looks really awesome. I hope it takes off! I'm pretty curious what it looks like though--as much as it shouldn't matter, syntax is very important.

There's a lot of overlap with how I want my language Mutagen[1] to work: Merkle trees of syntax, persistence/communication details handled transparently (or at least orthogonally to any other program logic). Where I think this is most important, though (and this description of the project doesn't seem to mention it), is optimization. Things like database de-/normalization, data locality, and really so much more--pretty much everything engineers do for performance rather than business logic--should be in the hands of the compiler. Mutagen has been on hiatus for a while, but I've been thinking about it a lot more lately, thinking about where to go next. If anybody is interested in it, let me know--I'd love some contributors to work with, or even just people interested in optimization/compilers/functional programming to talk to.

[1]https://github.com/zwegner/mutagen


Show, don't tell.


It mostly reminded me of Spolsky:

http://www.joelonsoftware.com/articles/fog0000000018.html

> That's one sure tip-off to the fact that you're being assaulted by an Architecture Astronaut: the incredible amount of bombast; the heroic, utopian grandiloquence; the boastfulness; the complete lack of reality. And people buy it! The business press goes wild!


But if you're going to tell, be concise and coherent.


Yes, 1000 times yes!

I know the author has a lot to say, but even after patiently wading through the first few paragraphs I was not prepared to invest any more time in figuring out what was so great about this project. The article title was enough to get my attention, but I could not hang in there long enough to get excited about (or even remain mildly interested in) the project.

Edit: OK, it was probably more like 50-60% of the first page that I read through, not just the first few paragraphs.


Unless you're trying to be precise and detailed (as opposed to spewing startup advertising), in which case "tell" is exactly what you should be doing.


This all honestly resonates a lot with me. It sounds a little like LightTable. It is not explained best, that is for sure. I still need to find and see the software, but provided there is some, I think his heart is in the right place.


I think it's more analogous to Urbit than LightTable.


From what I know of his project, the best description is probably: a vertically integrated Haskell-like language, a fully structured editor (think paredit without parens), and a distributed framework.

So yeah, Urbit is a better analogue than LT :)


Except we don't have an editor ;)


Like Urbit but less crazy I'm guessing?


Sure, albeit that describes everything.


Thanks for that reference.


This could be interesting.

An observation: "users have accepted that computing is organized as a constellation of rigid software appliances ('apps') that never seem to work well together and certainly can't be composed or extended"

And yet, unless I have sorely misunderstood, Unison's editor is a browser-based thing, which creates a barrier much more rigid than the ones surrounding the tools I use today.

A question: does this just make web apps? I don't do webdev and nobody bothers to mention it these days when they presume all the programming you do is for the web.


> Perhaps 70% of developer time is spent dealing with parsing, serialization, and persistence. Values are encoded to and from JSON, to and from various binary formats, and to and from various persistent data stores… over and over again.

Off-topic, but by using Lisp one can refocus that 70% on actually solving problems.


Lisps can automatically define models, serialization and deserialization for those models, and data persistence?


The language allows it to be easily done. The model definitions can be simply transformed into code that directly handles those individual model artifacts.

This is a super common paradigm in Lisp, especially when it comes to class/structure/data definitions, where support functions of any type can be generated straight from the defs. There is no "automatic" language feature there, because no definition of "automatic" covers all useful cases. It's simply easy to do because of the homoiconic nature of Lisp (code = data = code, with easy manipulation to transform between them), and no real distinction between compile time and run time.
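
Lisp itself isn't shown here, but the derive-support-functions-from-the-definition pattern can be sketched in Python with a hypothetical `model` decorator (the decorator and class names are made up for illustration; this is not any real library):

```python
import json

def model(cls):
    """Hypothetical decorator: derive serialization support functions
    directly from a class's declared fields, analogous to generating
    them from a Lisp defstruct/defclass form."""
    fields = list(cls.__annotations__)

    def to_json(self):
        # Serialize exactly the declared fields.
        return json.dumps({f: getattr(self, f) for f in fields})

    @classmethod
    def from_json(c, s):
        # Rebuild an instance from the same declared fields.
        data = json.loads(s)
        obj = c.__new__(c)
        for f in fields:
            setattr(obj, f, data[f])
        return obj

    cls.to_json = to_json
    cls.from_json = from_json
    return cls

@model
class User:
    name: str
    age: int

u = User.from_json('{"name": "Ada", "age": 36}')
assert json.loads(u.to_json()) == {"name": "Ada", "age": 36}
```

In Lisp the same thing is done with macros over the definition form itself, which is why no special "automatic" language feature is needed.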


Wrong. In CL there is a well-defined distinction between read time, compile time, load time and run time.


Compile-time and load-time occur during run-time. The run-time context of anything already executed is available to subsequent compile-time and load-time contexts. Of course, these then further affect the run-time context going forward.

While these "*-time"s are distinguished, they can be invoked and interplay arbitrarily, unlike most other languages. It's all fundamentally run-time, in contrast to those, especially when considering threaded environments where compilation and run-time execution can happen simultaneously.


> Perhaps 70% of developer time is spent dealing with parsing, serialization, and persistence. Values are encoded to and from JSON, to and from various binary formats, and to and from various persistent data stores… over and over again.

> Another 25% is spent on explicit networking. We don’t merely specify that a value must be sent from one node to another, we also specify how in exhaustive detail.

> Somewhere in between all this plumbing code is a tiny amount of interesting, pure computation, which takes up the remaining 5% of developer time.

> These numbers are made up, of course, but if anything they are optimistic

Optimistic for your argument, that is.


Yeah in my experience, the real time sink is the disproportionate cost of making a decent user interface, regardless of framework or environment.


> There are no parse or type errors to report to the user, since the UI constrains edits to those that are well-typed.

It may sound counter-intuitive, but if your editor makes it impossible to type incorrect statements, it'll be really hard to learn/use/hack. Play is an important (essential) part of learning.

That said, I think there needs to be more experimentation in this area, so I hope you're successful.


On the subject of using nameless hashes in software development, to "solve" the naming problem, I wrote this article a while ago: http://thinkinghard.com/blog/DisorganizedIncrementalSoftware....



If you like this take a look at Inform7

http://inform7.com/


this...sounds...beautiful

I really enjoyed reading about functions being identified by their hashes, and what advantages that could have.
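
The hash-based naming idea can be sketched roughly as follows (a toy Python illustration with a made-up `add_definition` helper; Unison's actual scheme hashes the syntax tree, not normalized source text): a definition's identity is the hash of its content, and human-readable names are just metadata layered on top.

```python
import hashlib

codebase = {}  # content hash -> definition source
names = {}     # human names are just metadata pointing at hashes

def normalize(src):
    # Toy normalization: collapse whitespace so formatting changes
    # don't alter a definition's identity. (Unison normalizes much
    # more aggressively, hashing the syntax tree itself.)
    return " ".join(src.split())

def add_definition(name, src):
    # Identity is the hash of the content, not the chosen name,
    # so renaming a definition never breaks its callers.
    digest = hashlib.sha256(normalize(src).encode()).hexdigest()
    codebase[digest] = normalize(src)
    names[name] = digest
    return digest

h1 = add_definition("increment", "lambda x: x + 1")
h2 = add_definition("succ",      "lambda  x:  x + 1")  # reformatted copy

assert h1 == h2            # same content, same identity
assert len(codebase) == 1  # stored once; both names point at it
```

Among the advantages: two differently named copies of the same function dedupe automatically, and a call site references the hash, so it keeps working across renames.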


Link is currently unresponsive, but I think this came up on HN before - and had a similar flavour to TempleOS, in that it may be brilliant but is unlikely to ever make sense to anyone but the author.


How much of this exists? Is there a screencast?


I'm not sure that implicitly calling your target customers lazy and/or inefficient is such a great introduction.


I am sure that throughout history there were slaves who were inefficient, and I wouldn't say it was their fault. A lot of decisions are made without asking the developers. Those decisions then create situations where planning more than 3 months ahead is meaningless, so we tend not to. And there we are quite inefficient, especially from an external point of view. Don't take offense where there is none.


Akka's location transparency gets rid of a bunch of the network bs that the first paragraph talks about.


What are your sources for the percentages in your about section? I have trouble believing they're real.


> next-generation programming platform

Sounds like a pretty BS claim for what appears to be vaporware (Also the claims they make on their page are pulled out of their ass "Perhaps 70% of developer time is spent dealing with parsing, serialization, and persistence." <- WTF???).


Will it have, e.g., unsigned floating-point types, or is it the same old match-to-hardware approach?

PS: might be a false dichotomy, but then "next-gen" is an exaggeration.


It's LightTable all over again.


back in the 80s CMU had an editor called "gnome" that also had the syntax built in and so you could not make a syntax error: http://www.cs.cmu.edu/~pane/ftp/ILE.pdf

i can't say i was a fan of using it.

but anyway, this isn't remotely a new idea.


Pro Tip: if I have to dig through eight paragraphs of fluff to find out what the fuck your product is, then you REALLY need to rethink the description of your product.

In particular, it was not at all clear what was meant by a "programming platform". Until I finally stumbled onto the description below, I thought it was just another web-based IDE.

> What is the Unison platform? At a high level, it consists of three components:

> * The Unison language: a powerful, typed, purely functional underlying programming language

> * The Unison editor: a rich, in-browser semantic editor for editing programs

> * The Unison node: the backend implementation of the Unison language, its typechecker, and the API used by the Unison editor and other remote nodes in the network

> The collection of Unison nodes (the Unison web) form a network platform in which data and functions may be trivially exchanged, with a minimum of friction.


If I understand it correctly, it's like Typed Racket with a purely functional AST that is also a content-addressable Merkle tree, plus a UI with paredit-style type correctness.


this was strangely entirely grokkable, thanks.


The author posted a followup to counter the impressions of vaporware: http://unisonweb.org/2015-05-07/update.html


...which is also eight paragraphs of fluff trying to connect with the audience.


I understand clearly what it is; perhaps it's only fluff to you because it's aimed higher than your technical reading level?


Please tell me how things like "I am kind of kicking myself for making what ended up being a bad tech choice..." and "gak!" are directly informative of the point of the article? This article needs to take a tl;dr approach and cut down the statements that could be stated in one sentence. Maybe other people might want to know this guy's thought process behind everything, but I'm much more engaged with the cold facts.

1-2 months to research Elm and transition the Unison editor? Okay that's all I need to know.

Realistically you would probably only be able to work part-time because you are a paid consultant? Okay cool that's self explanatory, you need to make money and we get that.

This article is informative, but it could've been edited to a maximum word count.

P.S. Thanks for being mature. I willingly admit that I'm young and I have a lot to learn, but I do happen to know terms like HTTP+JSON and why a one-file-per-hash system is an insanely inefficient implementation.


This isn't a 'Show HN'.


Pro tips:

* Be courteous when offering constructive criticism (that means, for example, don't swear at people)

* Often it's not the project owners posting things on HN - many projects aren't necessarily ready for public consumption when they end up on here

* This doesn't even appear to be a "product". It's an open source project that seems to have been recently put out there

Absolutely typical that this is the top rated comment. Hacker News, you SUCK.


> Absolutely typical that this is the top rated comment. Hacker News, you SUCK.

Your other points are right (I winced at that "what the fuck your product is", and I bet most readers did), but this one isn't. Acerbic dismissals are a problem on HN, but they're far from "absolutely typical".

The trouble with that false generalization is that it conditions us to see the community the wrong way. Repeat too often that a town doesn't care about litter, and more people will be careless with their trash. But users here do care.

It's better to view this systemically, as a tragedy-of-the-commons problem that we all need to work on, than to make a big blaming judgment as if it were easy for things to be better. They should indeed be better—but it's not that easy. What is easy is to see it as everybody else's problem ("Hacker News, you suck"). In reality, anybody commenting on Hacker News is part of it, so we're talking about ourselves.


Agreed. It's disturbing that there are so many members who are seemingly intelligent, but either haven't read the guidelines or think that guidelines don't apply to them.


>Hacker News, you SUCK.

Very constructive and courteous.


As with many things in life, there's also a relevant xkcd: https://xkcd.com/927/


I don't see how that is relevant. The author's complaint isn't that there are too many languages, but that none of them are good enough.


There should be an xkcd about people overusing that xkcd in every possible context


This was the first thing that came to mind about a paragraph into reading


For anyone else with a web programming library or framework: it helps to have code snippets and screenshots on the page.


Posterchild for TL;DR


Are you creating visual studio / visual basic but with some new language?


Can a day pass by without a new fucking programming language?


Why reinvent an editor from scratch? This is pretty much a guarantee that you will get zero traction.

Your time would be much better spent writing a plug-in for Eclipse or IDEA.



