Interesting idea, but I would offer an alternative to JSON for device<-->dweet communication. JSON parsing on small embedded processors is often memory- and CPU-prohibitive. A small (preferably binary) protocol is lighter and faster to work with on these little microprocessors. When I built my Lightcube[1], I designed a binary protocol[2] that is easily parsed on the Arduino or even the smaller ATtiny microcontrollers. Designing and implementing the protocol was a learning process for me, but I ended up with something that didn't tax the CPU, leaving me more processing cycles to interact with my hardware.
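To give a flavor of what I mean (a hypothetical sketch, not the actual Lightcube protocol from [2]), a fixed-size binary frame like this parses in a handful of instructions:

```cpp
// Hypothetical example: a 6-byte frame with a start byte, a command
// byte, three payload bytes (R, G, B), and an XOR checksum.
// Parsing is just byte compares -- no heap, no string scanning.
#include <stdint.h>

struct Frame {
  uint8_t cmd, r, g, b;
};

// Returns true if buf holds a valid frame, filling `out` on success.
bool parseFrame(const uint8_t buf[6], Frame &out) {
  if (buf[0] != 0x7E) return false;                 // framing byte
  uint8_t sum = buf[1] ^ buf[2] ^ buf[3] ^ buf[4];  // checksum over cmd + payload
  if (sum != buf[5]) return false;                  // corrupted frame
  out.cmd = buf[1];
  out.r = buf[2];
  out.g = buf[3];
  out.b = buf[4];
  return true;
}
```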
With JSON, it's a trade-off: you're either doing the heavy lifting on the device side or you're doing it on the client side.
This was my first project where I worked with binary and I learned some neat tricks. For example, the simulator that runs on the Arduino has to downsample the colors to work within the constraints of the 3-bit color that the "Ansiterm" library offers. When I first approached the problem, I was envisioning a complicated series of "if" statements to map the 8-bit color to 3-bit. Then it hit me: I could simply mask all but the highest bit of each of the 8-bit R/G/B colors and use that to generate my ANSI color blocks. This is something that I might not have learned if I was using JSON to pass the data.
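In code, the whole trick fits in one expression. A sketch of the idea, assuming the usual ANSI bit layout (bit 0 = red, bit 1 = green, bit 2 = blue, foreground codes 30-37) rather than whatever Ansiterm's API actually takes:

```cpp
#include <stdint.h>

// Keep only the most significant bit of each 8-bit channel and pack
// the three bits into a 3-bit ANSI color index.
uint8_t ansiColorCode(uint8_t r, uint8_t g, uint8_t b) {
  uint8_t index = (r >> 7) | ((g >> 7) << 1) | ((b >> 7) << 2);
  return 30 + index;  // e.g. (255, 40, 30) -> 31, plain red
}
```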
Agreed, from a code-readability standpoint it's a little on the strange side, but I wouldn't call it complete nonsense either. My thinking behind the design was that developers are a lot easier to deal with than the end-users/laypeople you might rely on to help debug things. Besides, it's easy enough to write wrapper functions or use #define (in the case of C/C++) to make the code more readable.
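For example, something as small as this (the key names are from HAPI responses; everything else is an illustrative sketch) keeps the conversational strings out of the application code:

```cpp
#include <string.h>

// Hide HAPI's conversational keys behind conventional names.
#define HAPI_KEY_STATUS  "this"  // e.g. "this": "succeeded"
#define HAPI_KEY_PAYLOAD "with"  // wrapper around the actual data

// True if a HAPI status value indicates success.
static inline bool hapi_succeeded(const char *statusValue) {
  return strcmp(statusValue, "succeeded") == 0;
}
```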
Nothing is perfect for everyone, and I think the response format is probably the least important aspect of HAPI. The biggest bang for the buck, in my mind, is self-documenting URLs and support for the HTTP GET verb on every operation. Just my $0.02 :)
It depends on what you want to support. I've done something similar when working on platforms that didn't actually support POST/PUT/DELETE. However, you then have to contend with misbehaving caches too.
So let's not call it a security issue, then. Let's call it an "it's too easy to delete" issue. And if that's the case, then that's exactly what we're going for: easy. Remember, if you were truly protecting something secure, you would require a security token parameter, which, BTW, is something we're planning in the next rev for people who want to protect their machines.
If I was phishing to get you to click on a link to delete a resource, then I would need to know that token, and if I knew that token, then I could just delete it myself. Note that the HAPI spec discourages the use of cookies (which I agree could allow a phishing attack if you were using cookies as a security mechanism).
Cool, and good luck! I'd be cautious about the name sounding so close to Tweet, especially when you advertise it as "like Twitter." Changing the name now will hurt a lot less than changing it later.
It makes me wonder why we even make these things available online. Why do machines need to talk to each other? If they need to collaborate to complete some task, they should be controlled by a human or a program.
You can put these things online talking HTTP as toys to play with, but not as serious business products. It's becoming a trend now. Kidding.
Two aspects:
1. Cost: to make them talk HTTP or expose an API, every one of them has to carry a web server installed on top of an OS. That's not necessary. Machines need only very simple commands to drive them to perform certain tasks, because they already know how to do the work; you only need to tell them what to do.
2. Security: machines at home should be controlled from inside the home, or by a computer the home user controls, which could sit in the cloud behind OS-level security rather than plain HTTP or HTTPS. If you're going to expose all of your devices' status to the internet, do you feel secure? If you do want remote control, you need some way to schedule it or go through the cloud.
> 1. Cost: to make them talk HTTP or expose an API, every one of them has to carry a web server installed on top of an OS. That's not necessary. Machines need only very simple commands to drive them to perform certain tasks, because they already know how to do the work; you only need to tell them what to do.
An Arduino can run a web server. It's amazingly cheap to run HTTP.
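A minimal sketch with the stock Ethernet library shows how little it takes (the MAC address is a placeholder):

```cpp
#include <SPI.h>
#include <Ethernet.h>

byte mac[] = { 0xDE, 0xAD, 0xBE, 0xEF, 0xFE, 0xED };  // placeholder MAC
EthernetServer server(80);

void setup() {
  Ethernet.begin(mac);  // get an address via DHCP
  server.begin();
}

void loop() {
  EthernetClient client = server.available();
  if (client) {
    // Answer every request with a tiny plain-text page.
    client.println("HTTP/1.1 200 OK");
    client.println("Content-Type: text/plain");
    client.println("Connection: close");
    client.println();
    client.println("hello from an Arduino");
    client.stop();
  }
}
```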
> 2. Security: machines at home should be controlled from inside the home, or by a computer the home user controls, which could sit in the cloud behind OS-level security rather than plain HTTP or HTTPS. If you're going to expose all of your devices' status to the internet, do you feel secure? If you do want remote control, you need some way to schedule it or go through the cloud.
My door lock, thermostat, and smoke/CO detectors are already controlled "via the cloud". In for a penny, in for a pound. You're always free to not buy the devices.
I need to clarify my points a little. I don't mean we don't need controllable devices; I've wanted a door lock and a window slider for years. But I won't buy a thermostat for more than $100, because the way these "things" are produced is not economical.
We can have a centralized program to control all the things at home. Two uses:
1. Remote control: we should control devices via OS-level security and avoid the unnecessary cost of speaking HTTP, no matter how cheap it is. If they can be controlled directly, why do they need to understand HTTP?
2. Talking to each other: they can communicate through the controlling program. They don't need to be intelligent enough to talk to each other over HTTP.
By the time we have every "thing" at home, you'll see the difference. The cost must drop to almost nothing for home automation to become popular, with very low overall overhead compared with existing devices. Think about it: we'll have 100 things at home. Are you going to buy every one of them for $100? That's neither necessary nor scalable.
Some sort of pun on std.io might be fun, but then again, "STD" means something rather different to normal people than it does to programmers.
Very cool. One suggestion for the product page is to add some simple use cases explaining why you'd use such a service. Also, are there size constraints for the dweets?
Basically, any device or thing with an internet connection can publish information about itself so it can be consumed externally. For example, if you had a BeagleBone board monitoring temperatures and you wanted to share those temperatures with other people or machines in the cloud, you could do it easily with dweet.io. It's basically a really simple pub/sub platform.
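For instance, publishing from a device is a single HTTP GET. A sketch in Arduino style (the thing name and sensor read are placeholders; the /dweet/for URL pattern is the one shown elsewhere in this thread):

```cpp
#include <SPI.h>
#include <Ethernet.h>

byte mac[] = { 0xDE, 0xAD, 0xBE, 0xEF, 0xFE, 0xED };  // placeholder MAC
EthernetClient client;

float readTemperature() { return 21.5; }  // stand-in for real sensor code

// Publish one reading as a dweet for a thing called "my-thermometer".
void publishDweet(float tempC) {
  if (client.connect("dweet.io", 80)) {
    client.print("GET /dweet/for/my-thermometer?temp=");
    client.print(tempC);
    client.println(" HTTP/1.1");
    client.println("Host: dweet.io");
    client.println("Connection: close");
    client.println();
    client.stop();
  }
}

void setup() { Ethernet.begin(mac); }

void loop() {
  publishDweet(readTemperature());
  delay(60000);  // once a minute
}
```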
This is awesome! It used to be somewhat inconvenient to set up a simple message-passing backend for my Arduino Ethernet. This is exactly what I needed to eliminate that mundane task. To the creator, my eternal gratitude!
Nice! One thing I would like to see is some way to reserve names so that only I can use them – right now, if I'm publishing data to http://dweet.io/follow/outdoors , anyone can easily overwrite it with http://dweet.io/dweet/for/outdoors?no=temp&for=you - so that'd be a neat feature. Unless the idea is that handles would be shared between devices...
Hey, this is awesome. Would you mind if we sent a tweet about what you did and linked to that code? Any chance you might be interested in writing a guest blog post on how you connected?
Yes, this is by design, to make it easy. Being able to lock things to make them private is an upcoming feature. No promises, but it will probably be available this coming week.
I'm currently working on a similar system aimed at robotics. M2M is rapidly growing under the radar because mobile phones have made a lot of the necessary technology cheaper. The interesting bit here is the protocol, which still feels a bit off to me but works well. Designing protocols is not an easy task. Well done, Bug Labs.
@jheising Send me an email. I'd like to get a friendly conversation going on. :)
I kind of like the HAPI concept, but it feels strangely redundant, especially in the responses, where almost half of the k/v pairs are just for "decoration" purposes. I concede that it does make things slightly easier to read, but are you guys sure it isn't bordering on redundancy for the sake of redundancy?
It might seem like it, but it's actually not that different from a lot of REST APIs I know of. Most REST APIs return a status (in our case "this":"succeeded"), and some sort of wrapper for the actual data (in our case "with":{}). The only thing that might be redundant is the "by" property, but I think it's useful to have in cases where you want to make doubly sure that the API did what you thought it was going to do.
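For illustration, a response shaped the way described above (the values here are made up):

```json
{
  "this": "succeeded",
  "by": "getting",
  "with": { "temperature": 72 }
}
```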
Many parts of the HAPI spec seem to be in outright conflict with the established best practices and standards. E.g.
> A HAPI must support the HTTP GET verb on all operations.[1]
Contrast this with HTTP/1.1 RFC 2616:
> In particular, the convention has been established that the GET and HEAD methods SHOULD NOT have the significance of taking an action other than retrieval. These methods ought to be considered "safe".[2]
Correct. Damned be the HTTP specs and REST, because they are such a pain in the ass :) I built HAPI precisely because the current standards only work in a world where everyone knows how to use curl and enjoys doing so. I think you should be able to paste a URL for an API anywhere and it should just work.
For example: Here let me show you how to delete that resource using our API... Oh wait, damn. I can't show you because I have no way of sharing a link with you because it requires the DELETE verb. Just go read this documentation and get back to me when you're done. ;)
Submitting a form with JS is a whole other level of complexity compared to just having a link out in the wild that performs write operations. And using a CSRF token defeats the stated intent.
No promises yet, but we plan on offering paid plans that could include SLAs on latency. Although I will say we architected it from the get-go to be high-throughput and low-latency: it's all Redis, in memory, on the back end.
Right now there is nothing to protect because the API is inherently open (on purpose to make it easy). But next week we will be introducing a security token mechanism to be able to "lock" machines. In this case HTTPS will be important and will be required. So yes, it's coming...
Nothing too fancy--just submit a git repo and/or sample website showing off a clever use of your software. For the prize, maybe bragging rights, maybe an API key, maybe a pet elephant; just something fun, you know?
It's a pretty low-level product, right--basically a messaging fabric in the cloud that is easy to use. I can think of some delightfully perverse things to do with it, and I imagine others can too.
If you want something that can be plugged into a random network somewhere in the world and be reasonably capable of phoning home, HTTP is your best bet for not being firewalled.
There's also something to be said for using a well-known protocol rather than something obscure which is likely to require time to debug because it's had many orders of magnitude less usage.
Your point is valid, and you gave suggestions as well.
I agree with you that machines should not talk at the HTTP level; it's overkill. Machines only need very simple protocols to exchange their current state if they do need to cooperate. To me, it should be enough for devices to exchange state with the controlling program, which can then send that state to other machines so they can determine the next step, or directly instruct them on what to do next.
I know it's a lot of fun to build these kinds of cute machines, but it's not economical.
Our only use case was ridiculously simple, and I don't think CoAP or MQTT fit it. However, at some point, yes, we will add other protocols. Thank you.
WOW! Bug Labs, you guys are still around. I remember back in the summer of 2006 when the BugBox was the hot product. I was trying to build a real-time geo-tagging service to tag speed traps using the BugBox[1]. Then six months later, Steven Paul Jobs came through with the hand of magic, announced the iPhone with all of the sensors that were in the BugBox in a much better product, and simultaneously erased Bug Labs from my mind. As far as http://dweet.io goes, this looks like a very good implementation of a message bus. The two things I'm worried about with this project are performance and concurrency.
Let's start with performance. If you're trying to get a predicted 26 billion devices by 2020[2] onto a Node.js/Socket.io framework, you're going to need so many machines that the business case won't be viable. Socket.io is great for message passing, but at high throughput it's going to fall short. Message buses like ØMQ (ZeroMQ) are a lot lighter and can bind to a file descriptor without spawning an HTTP server. As for Node.js performance: it's great for JSON serialization (which you're going to be doing a lot of), but it's far from the fastest. Node blows away languages/frameworks like Ruby/Rails and Python/Django in speed when it comes to JSON serialization (and almost everything else), but it's still in the 30th percentile for JSON serialization, which is very low[3]. With this being one of your most-performed operations, it should be as fast as possible.
As far as concurrency is concerned: Node.js isn't an inherently concurrent language, although it does have its non-blocking I/O callback magic. Obviously, you can use modules like cluster, native process spawning, or backgrounder[4], but the weight of those processes is expensive compared to the amount of work that needs to be performed. Supporting concurrency patterns like RUN.At, RUN.Until, RUN.After, RUN.Every, RUN.WithTimeout, and RUN.BatchJobs is easy to do in Node.js, but then getting those individual processes to talk to each other in an orderly fashion using callbacks and Socket.io seems like duct tape and chewing gum compared to just using a language that supports concurrency natively.
Other than that, I love it, I understand the vision (because we're building something similar), and I'm very glad that you guys open sourced this project.
Thanks for the feedback! To be clear, the node.js library is just a client-side library, so technically it should only be running on a single machine in most cases.
Also, we only use socket.io for real-time pubsub, but you could just as easily use a polling mechanism with HTTP to get similar results if you're worried about the performance and overhead it carries.
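A sketch of what that polling could look like on a device, in the same Arduino style as above (the path is a placeholder, not a documented dweet.io endpoint):

```cpp
#include <SPI.h>
#include <Ethernet.h>

byte mac[] = { 0xDE, 0xAD, 0xBE, 0xEF, 0xFE, 0xED };  // placeholder MAC
EthernetClient client;
unsigned long lastPoll = 0;
const unsigned long POLL_INTERVAL_MS = 5000;  // poll every 5 seconds

void poll() {
  if (client.connect("dweet.io", 80)) {
    // Placeholder path -- substitute the real read endpoint.
    client.println("GET /placeholder/latest/for/my-thing HTTP/1.1");
    client.println("Host: dweet.io");
    client.println("Connection: close");
    client.println();
    while (client.connected()) {
      if (client.available()) client.read();  // consume (and parse) the response
    }
    client.stop();
  }
}

void setup() { Ethernet.begin(mac); }

void loop() {
  if (millis() - lastPoll >= POLL_INTERVAL_MS) {  // no blocking delay()
    lastPoll = millis();
    poll();
  }
}
```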
I think perhaps the most important thing is to be very clear about our goals here. Dweet.io is NOT built for super-low-latency pub/sub. IMHO, most devices in the future of IoT won't need to communicate with the cloud more than a few times a minute, or at most once per second. For the devices that do need low-latency (sub-second) pub/sub, I agree: you should look at other protocols like MQTT. But if I were a betting man, I'd say the vast majority of IoT devices in the future won't need a level of performance that warrants the extra headache.
[1] Lightcube: https://github.com/chrissnell/Lightcube
[2] Lightcube protocol: https://github.com/chrissnell/Lightcube#lightcube-protocol