Q/Kdb+ has the most natural implementation of IPC that I've seen, as long as you can keep all your gear inside Q.
To connect to another host you just do:
remote:hopen `:server:port;
The port number is specified with -p on the command line. To send messages:
answer:remote "sum x"
Or, if strings are repulsive to you, you can pass an expression instead: answer:remote (`sum;x). Here `sum is a symbol, which is like a reference to a variable (or function). You can also pass a literal function for the remote host to execute, since functions are just data.
You can also do asynchronous calls using the negative value of the handle (which is just a file descriptor int, wildly enough): neg[remote] (`incViews;`welcomepage;1)
Completely natural and simple. The only issue I'm aware of is a 2GB limit per message; built-in functions can split your data into blocks to transmit larger amounts.
The in-memory representation of data and the network representation are essentially the same. So there's no costly per-byte packing or unpacking going on, simply a memcpy(). In terms of performance and power use (environmental impact) this is a huge win.
It's trivial to expose data to the outside world by creating a web server. Simply define a function called .z.ph (or .z.pp for POST). The one below evaluates the requested URL as a Q expression and returns the result as JSON.
.z.ph:{.j.j value x 0}
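(A hypothetical example: assuming the process was started listening on a port with -p 5001, requesting http://localhost:5001/til[10] should come back as the JSON array [0,1,2,3,4,5,6,7,8,9], since x 0 is the URL text, value evaluates it, and .j.j serializes the result.)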
Being able to build services as simply as this allows you to begin to think about the microservice as the unit of program composition, rather than classes, functions, or modules. Microservices have their own issues, but that's a topic for a later rant.
Looks to me like DBus is actually pretty good and just needs some solid implementations, some bug reports, and some active contributors. I'm not super familiar with non-server applications on Linux, but isn't DBus already in widespread use there?
Most Linux server apps I know communicate through the filesystem by leaving a .pid, .lock, and/or .sock file somewhere and then setting up a custom channel over a handle to it. It feels kind of hacky, but at least you get to control the crappiness.
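For illustration, a minimal Python sketch of that ".sock somewhere" pattern; the socket path is made up, and a real daemon would also write the .pid file and handle permissions:

    import os, socket

    SOCK_PATH = "/run/myapp/myapp.sock"  # hypothetical well-known path

    def serve():
        if os.path.exists(SOCK_PATH):
            os.unlink(SOCK_PATH)  # clear a stale socket left by a crash
        srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        srv.bind(SOCK_PATH)
        srv.listen(1)
        conn, _ = srv.accept()
        conn.sendall(b"hello from the daemon\n")
        conn.close()

    def client():
        c = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        c.connect(SOCK_PATH)
        print(c.recv(64))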
kdbus actually supports a whole load of fairly sensible communications models, with a reasonably simple API, and ditches the horrible bits of dbus. It's a shame that it's (unnecessarily IMO) bound up in controversy.
It doesn't seem very general-purpose unless there are also non-kernel implementations, and you can't sanely remote it, as it relies on the kernel for authentication.
Its purpose is single-host communication. If you assume a single host, then you don't have to deal with distributed-computing problems at all.
Not sure why having non-kernel implementations is a problem. Is the fact that there is no non-kernel implementation of Unix sockets / TCP/IP / filesystems / name-most-other-Linux-kernel-features a problem for you?
BTW I don't especially care about kdbus, but I do hate that there are no good, featureful communication primitives in Linux.
The article was looking for multihost too, which is why I mentioned it.
There are some non-kernel socket implementations, but that matters less because every kernel implements sockets; kdbus is not implemented anywhere else yet. It is also not possible to implement correctly in userspace, since the security guarantees rely on the kernel, which is inconvenient. It is unlikely that e.g. FreeBSD will ever implement it.
The problem with DBus is that it's been around for ages now and still most developers say they support it because they have to, not because they like it. Basically, it's not going to get much better than it already is, or by now it would have.
Providing an easy-to-use interface is the job of D-Bus client libraries, not the protocol itself. Client libraries are still a wild frontier, not at all settled.
I thought sd-bus was considered a good D-Bus client library? Haven't had the need to try it out myself, but the name has been popping up recently when dbus has been discussed.
It's a lot better. I've ported libdbus-1 code to sd-bus, and in practice hundreds of lines were replaced by about 20, with better error reporting on top of that.
Encoding complex objects is just hard (in the sense that it's going to get ugly, complicated, or, usually, both); however, just getting framing right (i.e. ensuring that client and server agree on where messages/objects start and end) at least solves many security problems. A simple binary (tag-)length-value scheme works, as do schemes based on http://cr.yp.to/proto/netstrings.txt.
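A minimal netstring codec sketch in Python, following the format from that link ("<decimal length>:<payload>,"); error handling and size limits are kept to the bare minimum:

    import socket

    def encode(payload: bytes) -> bytes:
        # "12:hello world!," -- decimal ASCII length, colon, payload, comma
        return str(len(payload)).encode() + b":" + payload + b","

    def decode(sock: socket.socket) -> bytes:
        # Read the decimal length byte by byte until the colon.
        digits = b""
        while (c := sock.recv(1)) != b":":
            if not c.isdigit():
                raise ValueError("malformed netstring length")
            digits += c
        n = int(digits)
        # Read exactly n payload bytes, then the trailing comma.
        payload = b""
        while len(payload) < n:
            chunk = sock.recv(n - len(payload))
            if not chunk:
                raise ConnectionError("peer closed mid-message")
            payload += chunk
        if sock.recv(1) != b",":
            raise ValueError("missing trailing comma")
        return payload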
Using newline delimiters makes the strings a little prettier, and every IO library has a function to read a line, sparing you the trouble of scanning for the colon yourself.
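The same idea with newline framing, again a Python sketch; it assumes the payload itself never contains a raw newline, which JSON encoding guarantees:

    import json, socket

    def send(sock: socket.socket, obj) -> None:
        sock.sendall(json.dumps(obj).encode() + b"\n")

    def recv_all(sock: socket.socket):
        # makefile() gives a buffered reader whose line iteration does the framing
        for line in sock.makefile("rb"):
            yield json.loads(line)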
gRPC is probably worth mentioning in this thread (new since the article), though I haven't used it yet and have no idea if it can satisfy his asynchronous requirement.
He considers using message queues and dismisses the idea because "you're still on your own in implementing an RPC mechanism on top of that. And for the purpose of building a simple RPC mechanism, I'm convinced that plain old UNIX sockets or TCP will do just fine.".
In my experience, getting an arbitrarily sized message from A to B over a potentially unreliable network is the difficult bit; implementing a basic RPC mechanism on top is pretty easy. I did a project that required a lot of IPC and ended up using nanomsg (also not mentioned in the article; it's very similar to ZeroMQ) for fast, reliable message delivery, and wrote my own basic RPC layer on top of this (NanomsgRPC, currently C# only). This worked pretty well for me.
A note on nanomsg, though: I wouldn't consider it stable enough for production use, but that said, it didn't give me any problems for what I used it for.
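To illustrate why the RPC layer is the easy part, a Python sketch of the idea (not NanomsgRPC itself, which is C#; send_msg/recv_msg stand in for whatever reliable message transport nanomsg or zeromq provides):

    import itertools, json

    _ids = itertools.count()

    def call(send_msg, recv_msg, method, *args):
        # Tag the request with an id so replies can be matched to requests.
        send_msg(json.dumps({"id": next(_ids), "method": method, "args": list(args)}))
        reply = json.loads(recv_msg())  # a real client would match reply ids
        if "error" in reply:
            raise RuntimeError(reply["error"])
        return reply["result"]

    def serve_one(send_msg, recv_msg, handlers):
        # Dispatch one request to a handler looked up by name.
        req = json.loads(recv_msg())
        try:
            result = handlers[req["method"]](*req["args"])
            send_msg(json.dumps({"id": req["id"], "result": result}))
        except Exception as e:
            send_msg(json.dumps({"id": req["id"], "error": str(e)}))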