You can mimic selective receive using a router process. Essentially, you create one channel for every branch of the pattern match, then spawn a process that reads from the input channel and deals each message to the appropriate branch channel.
It's not quite clear what patterns will emerge with respect to UI event handlers, so there really isn't an ELI5 answer yet in the View layer. The backend advantages are well covered by the Go language community, so check out Rob Pike's talk [1]. As for other parts of the frontend, like the model layer, channels can lead to a major simplification for the callback hell involving multiple coordinated network requests or for transforming and processing events off a message socket, like that of Socket.io.
The IoC threads work by converting parkable functions into Static Single Assignment (SSA) form [1] and then compiling them to state machines. Essentially, each time a function is "parked", it returns a value indicating where to resume from. These little state machine functions are basically big switch statements that you don't have to write by hand. This design is inspired by C#'s async compilation strategy. See the EduAsync series [2] on Jon Skeet's blog, and the compilation post [3] in particular. Once you have these little state machines, you just need some external code to turn the crank.
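To give a feel for what the transform produces, here's a hand-written Go sketch of such a state machine (the `machine`/`step` names are my own; the real macro output is more involved):

```go
package main

import "fmt"

const done = -1

// machine holds the saved "program counter" plus any locals that
// must survive across parks.
type machine struct {
	state int
	total int
}

// step runs the function body up to the next park point. The switch
// on m.state is the "big switch statement"; the return value tells
// the external driver whether there is more work to resume.
func (m *machine) step(input int) bool {
	switch m.state {
	case 0: // start: consume the first input, then park
		m.total = input
		m.state = 1
		return true
	case 1: // resumed after the park: consume the second input, finish
		m.total += input
		m.state = done
		return false
	}
	return false
}

func main() {
	m := &machine{}
	// External code "turns the crank", feeding a value at each resume.
	m.step(40)
	m.step(2)
	fmt.Println(m.total) // 42
}
```

Between calls to `step`, the driver is free to schedule other machines, which is how many logical threads multiplex onto few real ones.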
You can build analogous versions of map/filter/etc. on top of channels. The core.async team is still focused on the primitives, but surely some higher-order operations will follow in the not-too-distant future. I suspect we'll need to wait and see what patterns develop, since they will surely differ from Go a little bit, due to the functional emphasis of Clojure.
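For example, here's roughly what map and filter over channels look like in Go (a sketch; `mapChan`/`filterChan` are my own names, not a standard library):

```go
package main

import "fmt"

// mapChan is the channel analogue of map: a process that pulls from
// in, applies f, and pushes the result onto the returned channel.
func mapChan[A, B any](in <-chan A, f func(A) B) <-chan B {
	out := make(chan B)
	go func() {
		defer close(out)
		for v := range in {
			out <- f(v)
		}
	}()
	return out
}

// filterChan is the channel analogue of filter.
func filterChan[A any](in <-chan A, pred func(A) bool) <-chan A {
	out := make(chan A)
	go func() {
		defer close(out)
		for v := range in {
			if pred(v) {
				out <- v
			}
		}
	}()
	return out
}

func main() {
	nums := make(chan int)
	go func() {
		for i := 1; i <= 5; i++ {
			nums <- i
		}
		close(nums)
	}()
	doubled := mapChan(filterChan(nums, func(n int) bool { return n%2 == 0 }),
		func(n int) int { return n * 2 })
	for v := range doubled {
		fmt.Println(v) // prints 4 then 8
	}
}
```

Note that each operation is itself a process between two channels, not a method on a sequence object.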
I should also mention a key difference between channels and Rx/Observables: the fundamental operations for observable sequences are subscribe & unsubscribe, while the fundamental operations for channels are put and take. In the push sequence model, the consumer must give a pointer to itself to the publisher, which couples the two processes and introduces a resource management burden (i.e. IDisposable).
You can think of a pipeline from A to B in the following ways:
Sequences: B pulls from A
Observables: A pushes to B
Channels: Some process pulls from A and pushes to B
That extra process enables some critical decoupling!
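In Go terms, that third option is literally an extra goroutine sitting between the two channels (a sketch; the name `pipe` is mine):

```go
package main

import "fmt"

// pipe is the "extra process": it pulls from a and pushes into b.
// Producer and consumer never hold references to each other, so
// either side can be swapped out, buffered, or fanned out without
// the other knowing.
func pipe[T any](a <-chan T, b chan<- T) {
	for v := range a {
		b <- v
	}
	close(b)
}

func main() {
	a := make(chan int)
	b := make(chan int)
	go pipe(a, b)

	go func() {
		a <- 7
		close(a)
	}()
	fmt.Println(<-b) // 7
}
```

Neither endpoint subscribes to the other, so there is nothing to dispose when a side goes away; closing the channel is the whole protocol.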
> since they will surely differ from Go a little bit, due to the functional emphasis of Clojure
Very interested in this. The trivial Go examples seem much more channel-as-side-effect.
While I still need to wrap my head around the differences (and thanks for the explanation), one quick takeaway is how much easier it would be to write the higher-order operations with core.async channels versus observables.
It may sound reasonable, but ultimately is against the universal spirit of the web, and thus should not be standardized. Your #1 should be No, because if it were Yes, we'd have a mess of mutually incompatible, vendor-specific, proprietary languages fragmenting 3D content on the web.
Mutually incompatible, vendor-specific, proprietary anything is an inevitability. #1 isn't about that. That's what #2 and #3 are about.
#1 is about planning for extensibility. Just look at the hackery in JS where lonely, otherwise-ignored strings are used for things like "use strict" and "use asm". Or where Microsoft added "conditional comments", which, quite frankly, were essential to the development of Outlook Web Access, which basically gave us Ajax. Or all the absurd vendor prefixes on CSS property names. Or one of a hundred other little hacks that browser vendors have invented to try to innovate past the standard. Pushing past the standard, by the way, is the only way forward. We've learned that lesson by now, so we should plan for extensibility.
OpenGL already has a mechanism for extensibility, and proprietary junk is only an inevitability if we allow it to be enshrined in open standards. There is no reason to accept proprietary DRM plugins in CDM, and there is no reason to accept proprietary shader languages.
The reasons are manifold, but here are a few:
- Standardizing non-standardness gives proprietary implementations an unwarranted air of legitimacy and blesses incompatibility.
- Proprietary plugins and extensions are more likely to have untested security vulnerabilities and widen the browser attack surface.
- Proprietary extensions violate the essential web principles of cross-platform compatibility, graceful degradation, progressive enhancement, and accessibility.
I think it varies greatly with many factors, but agree that it's probably closer to 20. I'm just trying to avoid selling silver bullets.
> Has anyone ever measured a Reading Comprehension score for languages?
While language is a huuuge component of that constant factor, there are certainly other components. For example, if you want to write a sudoku solver, you should start by writing a backtracking constraint solver and then implement the sudoku solver on top of it. You'll write less code than trying to solve sudoku directly, but that code will certainly be slower to read. This applies within a language just as much as it does across languages.
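To make the layering concrete, here's a minimal sketch in Go, with a toy 4-queens problem standing in for sudoku (`Backtrack` and all other names are my own invention, not any library): the generic solver is tiny, but reading the problem definition now requires understanding both layers.

```go
package main

import "fmt"

// Backtrack is a tiny generic backtracking search: it tries each
// candidate extension of a partial solution until done reports success.
func Backtrack[S any](s S, done func(S) bool, next func(S) []S) (S, bool) {
	if done(s) {
		return s, true
	}
	for _, c := range next(s) {
		if r, ok := Backtrack(c, done, next); ok {
			return r, true
		}
	}
	var zero S
	return zero, false
}

func main() {
	// The "problem layer": place 4 non-attacking queens, one per row.
	// A partial solution is a slice of column positions, one per row.
	done := func(q []int) bool { return len(q) == 4 }
	next := func(q []int) [][]int {
		var out [][]int
		for col := 0; col < 4; col++ {
			ok := true
			for r, c := range q {
				d := len(q) - r // row distance to the new queen
				if c == col || c == col+d || c == col-d {
					ok = false
					break
				}
			}
			if ok {
				out = append(out, append(append([]int{}, q...), col))
			}
		}
		return out
	}
	sol, _ := Backtrack([]int{}, done, next)
	fmt.Println(sol) // [1 3 0 2]
}
```

The direct, hand-rolled version would be longer but reads top to bottom; the layered version is shorter yet forces the reader to hold the abstraction in their head, which is the reading-speed trade-off described above.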
> There may be situations where you want the entire history of your development to be included on your live server, but often this just isn't appropriate.
Are you concerned about being wasteful with disk space? Or is there some other concern here? Some security issue perhaps?
I once committed my DB settings (in Mercurial) and only noticed my mistake later. It's very hard to get it out of the history. Of course it could be fixed, but this is one example.
Imho version control could be used for deployment, but only when you use the release branch of your project.
And of course NEVER put your config in version control ;)
> And of course NEVER put your config in version-control ;)
I'm not sure I'd make that blanket statement. Version control seems like a great place for configuration. It allows you to centrally manage configuration details and provides an audit trail for debugging. You just want to make sure it is in a separate, secure repository and not mixed in with your app development.
FWIW, removing things from history entirely is easy with git using git-filter-branch. The real problem is realizing that you need to do that in the first place.
> And of course NEVER put your config in version-control ;)
Why wouldn't you want to version your configurations in general? Maybe I am not sure what you mean by "config" but in general version controlled configuration is always good. I even set up git in my etc sometimes to track changes I make to it manually (not in production, on my home machine).
I think what everyone is getting at here is to be smart about when to use git for deployment. Rsync is great for small static sites, the same way git is. Now, I'm not going to use it on my medium-to-large web app (actually I do for the testing server, but that's another story). It's just another way to get a site deployed. I think it's great for small static sites, and I prefer it over rsync for no other reason than that I'm already using git and it's just an extra git push when I'm ready. I actually use different remotes for deployment rather than a branching strategy.
Mostly it's because it seems people are using Git to deploy without a good reason. At least I haven't heard of an advantage enjoyed by those using Git for deployment.
There are some obvious disadvantages, so what is the compensation? It seems the only reason is that it's easy to type "git push". But of course any deployment method can be wrapped in an equally easy script command.
Okay, I have to admit to knowing of one advantage, and that is that only the delta of your changes will be transmitted over the wire, rather than a complete checkout. It's just that in practice the savings usually aren't enough to warrant the potential downsides. For my money, I'd prefer rsync or any number of other solutions.
The way we have it set up in Capistrano, git is used as the distribution mechanism. It offers a lot of flexibility in that you can deploy a tag or a revision hash or whatever, without having to worry about inconsistencies between users' machines or having to deal with an external packaging machine.
Once the repo has been fetched, we just check out the right tag/revision and do a local copy from the git repo into the app directory. At this step you can exclude .git if you want.
This process has an advantage over direct git checkout in that if you (heaven forbid) ssh onto the server and directly modify anything, you won't end up with conflicts.
Are you 100% sure that there is nothing you are exposing via your git repo that you want to keep away from the person who manages to hack your server or discover some means to reach the repo externally?
Getting hacked is not inevitable, but if you treat your systems as if it were you'll be a lot safer if it does ever occur.
If you push via git or via rsync, you're typically going over SSH in both cases. As far as the .git directory, my post-receive hook also does a "cp -R" of the files to the actual web-served directory (there's a build step in between anyway), so there's no .git exposed. As far as security, as long as one knows to handle the .git directory, there's no difference.
1) You can go through the trouble of identifying and resolving all the edge cases that you encounter when using Git as a deployment tool. Keeping in mind that one of those may result in an embarrassing security disclosure. Whoops.
2) You can use a deployment tool that was developed for that purpose, has existed for years, and has had many sets of eyes on it; many of which are inevitably more experienced than you. And you still may end up with an embarrassing security disclosure, but the chances are better that you'll hear about it through responsible disclosure channels first, rather than waking up at 3 AM to the voice of your boss/client asking why the site is redirecting users to buy Viagra at a discount.
A bonus third choice:
3) You look at existing deployment tools and ask yourself "I wonder why they do that?" Then, maybe ask around a bit. Once you've got a good idea of all idiosyncrasies involved with deploying software, then you embark upon building your own tool. I think you'll find that simply `git pull`ing from your httpd document root and `rm -R`ing the .git/ directory won't be your final solution.
The audit of commits in the general case (checking for errors in the code) and the audit for the deployment case (checking for sensitive data that may be exposed in a security breach) are different audits. I don't think many people tend to do the second kind of audit.
Also, minimizing your exposure in case of a security issue is probably a good idea, so the convenience of deploying with git may or may not be worth this extra exposure.
> if you fork a project and it becomes incompatible with the upstream, rename it
I agree completely, but I'd like to take your idea further in a direction you likely didn't intend.
The fundamental observation of distributed version control systems, in my opinion, is: Every commit is essentially a fork.
When you combine those two ideas, 1) fork->rename and 2) change==fork, with 3) the identities & values concept from FP/Clojure/etc, you realize that version numbers are complete folly.
In short, if you have awesomelib and make an incompatible version, you can call it awesomelib2. Or you could call it veryawesomelib or whatever else you want. If you give up on the silly idea of being able to compare version numbers, then versioning and naming become equivalent.
Version numbers only appear to be folly because proper engineering discipline has not been applied when managing the stability and/or backwards-compatibility of shared interfaces.
If more developers cared about versioning their software appropriately based on incompatible changes or stability guarantees, it would significantly reduce the costs of maintaining OS software distributions and providing integrated software stacks to users.
> Version numbers only appear to be folly because proper engineering discipline has not been applied when managing the stability and/or backwards-compatibility of shared interfaces.
Encoding intelligence (beyond, perhaps, simple sequence) in version numbers for software is fundamentally folly.
Encoding intelligence about compatibility in version numbers of APIs is only folly to the extent that "proper engineering discipline has not been applied when managing the stability and/or backwards-compatibility of shared interfaces."
Confusing what makes sense with software and what makes sense with APIs is as problematic as any other confusion of interface with implementation.
> If more developers cared about versioning their software appropriately based on incompatible changes or stability guarantees
But they don't. And you're not going to be able to make them. And even if you did, people would disagree about what constitutes compatibility, stability, and engineering discipline. One man's "breaking change" is another man's "that was an implementation detail". It's not possible to get this right, since first you need to define "right". That's why versioning is folly.
Application uninstalls are as trivial as dragging the application to the trash bin. No, this will not eliminate the application's data from ~/Library, etc, but 98% of the time you don't want that anyway. If you know what you're doing, it's usually a quick `rm -rf ~/Library/...` and you're done. Some poorly behaved apps stick stuff in other places or otherwise muck with your system, but now with the app store, that's no longer an issue.
And, if you're absolutely anal about deleting every single trace of an app, there are tools that automate the process. For example: http://www.appzapper.com/ -- But really, it's probably a waste of your time unless you had a badly behaved app go rogue. In my many years of Mac ownership, I've installed and uninstalled hundreds of apps, and the only time I ever had to bang my head against the wall was when I used to use MacPorts and a Postgres install went haywire because of the same sort of packaging nonsense that the article is talking about.
> Application uninstalls are as trivial as dragging the application to the trash bin. No, this will not eliminate the application's data from ~/Library, etc, but 98% of the time you don't want that anyway.
When uninstalling an application, you usually do want to remove all of the application's components. How often do you say, "You know, I'd like to uninstall 25% of this application, even though the remaining 75% will just be dead weight without it"?
The two kinds of things there are configuration/settings and user data. When you uninstall Adium (for whatever reason), uninstalling the chat logs from the last five years is probably not part of your expected outcome. Uninstalling an application shouldn't reach into your homedir and delete things, either on Linux or OS X.
I do expect when I delete, say, Steam, it won't leave 20 GB of games I can't play sitting around in a totally invisible place. Similarly, when I delete some video or sound editing software, I don't expect it to leave several gigabytes of samples and filters lying around. Both of these are real situations I've encountered when people came to me asking why their hard disk was so ridiculously full.
I can understand not deleting things out of ~/Documents, but a lot of stuff that goes in the Library folders is not what users think of as data that should outlive the application.
Steam itself is sort of a package manager, so that's an interesting edge case...
However, I think that there are basically three categories of application data:
1) Documents -- These should never be deleted and are not invisible
2) Settings & other small data not worth deleting, probably nice to keep around in case you ever re-install. Most stuff.
3) Large semi-temporary files, like samples and other downloaded add ons that are optional parts of the application
I think OS X handles 1 & 2 well, but you're right, it needs a way to handle #3 too. However, I think that #2 is a much better default than #3.
I can see arguments both ways on things that are technically recoverable but might require lots of download time (like 20GB of games). However, your settings in those games (if not held on Steam's servers; I don't know where they are) should never be deleted by the application. If you're saying that non-recoverable settings and such should go in ~/Documents, I disagree with that, too: things in the Library folders aren't what the users think of as data, but they are stuff that the users will be upset are missing if they uninstall and reinstall.