"web standards" have extremely optimized ubiquitous implementations with extensive tooling. I strongly disagree with you. CANbus on boats isn't exactly bullet proof, people mess up their N2K networks all the time by just starring at it wrong.
All these gizmos are not safety critical. Anemometer? Wind on your face. Depth? Plenty of people go around with broken ones. SOW paddle? They're always clogged. Wind angle? Wind on your face, check the telltales, check the sails, check the wind index. The BLE stuff in particular is nice-to-have and a pain to physically wire (masthead instruments).
The only "safety critical" thing in this tech is the chart plotter (and GPS), and weather reports. Marine chartplotters are glorified consumer grade computers, and this OpenPlotter thing with OpenCPN is 1000% more reliable and not actively trying to kill you with proprietary licensing crap. Your phone is a great chartplotter and plenty of people cross oceans using nothing more than an iPad.
I wouldn't use HTTP/json for any path that needs to close a control loop at hundreds of Hz, but to loosely couple a bunch of systems in a plant? No problem.
If you have a specific concern, explain it, instead of just exuding judgment.
Parent comment is more likely evoking a bit of a https://xkcd.com/2030/ response, aka humour.
SignalK is neat. As you note, not all protocols are suitable for all use cases, but in marine sensor networks, where most signals are in the 1-10 Hz range, it definitely has its place.
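For anyone who hasn't seen it: a SignalK update is just JSON going over a WebSocket. A minimal sketch of building and reading a delta-style message (the structure follows SignalK's delta format as I understand it; the source label and readings are invented):

```python
import json

# Build a SignalK-style "delta" message for an apparent-wind reading.
# Paths follow SignalK conventions; the values here are made up.
delta = {
    "context": "vessels.self",
    "updates": [
        {
            "source": {"label": "masthead-ble"},  # hypothetical sensor label
            "values": [
                {"path": "environment.wind.speedApparent", "value": 6.2},   # m/s
                {"path": "environment.wind.angleApparent", "value": 0.52},  # radians
            ],
        }
    ],
}

wire = json.dumps(delta)  # what actually travels over the WebSocket

# A consumer just walks the structure back out.
parsed = json.loads(wire)
readings = {v["path"]: v["value"]
            for u in parsed["updates"] for v in u["values"]}
print(readings["environment.wind.speedApparent"])  # 6.2
```

Nothing arcane to debug there: any JSON-aware tool on the network can read it.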
Anything that needs to run a server to communicate with anything else, especially if wireless is involved (which is often the case with more modern radar, GPS, etc., because of ease of wiring and installation), is a recipe for disaster if you're relying on interfacing with existing instruments for navigation.
Simply firing up a CLI for troubleshooting and debugging of any kind should not be "expected behavior" at sea.
> If anything needs to run a server to communicate with anything else,
What's a server? Do you have servers in a NMEA2k environment?
> Simply firing up cli for troubleshooting and debugging of any kind should not be "expected behavior" at sea.
No one said that arcane troubleshooting and debugging should be needed after integration of a system. I mean, NMEA2K certainly never makes one fiddle around and play with things aimlessly to make stuff work /s
> especially if wireless is involved
Most common way wireless is involved here is to get the information off-boat or to a tablet for convenient review of trends.
It’s your labeling that put “web standards” into one bucket. I assure you many of those web standards are far more secure and well tested for not only failure cases but also adversaries than what exists in marine systems.
You know... I've been working in an interesting combined AgTech and Aviation space (drones, but big ones) for a while now, and using JSON over Websockets for IPC is one of the best decisions we've made. We don't use it for everything, mind you; there are lower-level protocols we use to talk to embedded hardware devices, but when we can, we do. And while it's a draft standard, we basically riffed on a variant of this for most of it: https://datatracker.ietf.org/doc/html/draft-inadarei-api-hea...
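For flavour, the draft that link points at describes a health-check payload keyed off a coarse "status" of pass/warn/fail, with optional per-component "checks". A sketch of what a message in that spirit looks like (the check names here are invented, not our actual schema):

```python
import json

# A health payload in the spirit of the draft health-check format:
# one top-level status plus optional per-component detail.
health = {
    "status": "warn",
    "version": "1",
    "checks": {
        "gps:lock": [{"status": "pass"}],
        "radio:link": [{"status": "warn", "output": "RSSI low"}],  # invented names
    },
}

body = json.dumps(health)  # what a service would publish

def overall(doc: dict) -> str:
    """Collapse to the coarse status a UI would key off."""
    return doc.get("status", "fail")

print(overall(json.loads(body)))  # warn
```

The nice part is that every service, embedded or web-side, can emit and consume this with whatever JSON library it already has.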
The reason I love it so much is that it's just so straightforward to make a server or client that can talk to it. All of our embedded Linux systems are written in C++ right now, and they have absolutely no problem publishing and consuming messages in our standard format.

One of the original driving factors for this is that we do have some web-based and Electron-based UIs, and any protocol we made that wasn't HTTP-based or Websocket-based would require them to do twice as much work: first, connecting to whatever service from a "backend" server and implementing whatever protocol it needed, and second, exposing that backend service to the frontend over a Websocket (generally... since it needed live updates). By standardizing on our in-flight services exposing everything as Websockets natively, we pretty much eliminated a whole tier of complicated logic. The frontends have a single generic piece of code with standardized reconnect/timeout/etc. logic in it, and the backends just have to #include <WSServer.h> and instantiate an object to be able to publish to listeners.
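That shared reconnect logic is nothing exotic. As an illustration, the backoff schedule such a generic client might use (parameter values invented, not our actual tuning):

```python
def backoff_delays(base=0.5, cap=30.0, factor=2.0, attempts=8):
    """Capped exponential backoff schedule for WebSocket reconnects.

    Each failed connect waits `delay` seconds, then doubles the delay
    up to `cap`, so a flaky link recovers fast but a dead one doesn't
    get hammered.
    """
    delay = base
    out = []
    for _ in range(attempts):
        out.append(min(delay, cap))
        delay *= factor
    return out

print(backoff_delays())
# [0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 30.0, 30.0]
```

In practice you'd add jitter so a fleet of clients doesn't reconnect in lockstep, but the point is that this lives in exactly one place instead of being reimplemented per protocol.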
I definitely didn't start there. And I 100% understand where your opinion comes from... from so many different angles a lot of the "modern" web systems shouldn't come within a mile of a safety critical system. Websockets though? They're great! And while JSON isn't necessarily the most efficient encoding, it sure does make debugging easy. We run everything on a closed network that usually doesn't have an Internet connection, so we don't run TLS in between the ground and air systems. If we need to figure out what's going on and an interface is acting up, we can just tcpdump it and have human-readable traffic to inspect.
The flight critical stuff is isolated from all of this and spits out a serial telemetry feed (Mavlink). We do send that directly to the ground station over a dedicated radio, but we also have an airborne service that cooks that into Websockets and in many cases the Websocket-over-very-special-WiFi connection has been more robust than the 915MHz serial link.
And it's not as if existing protocols like NMEA are all that good either.
Thanks for sharing that! Very interesting. There is even less margin for error in the air than at sea. At least on a boat you can still float if the power goes out, and at that point a sewing machine is all you need for most critical problems.
We've actually leaned in pretty hard on using "standard" protocols as much as we can:
- We have a flight planning module that takes multiple polygons as input and returns a (large) list of waypoints covering the regions the polygons enclose. When I was trying to work out the request/response format, I decided to use GeoJSON with some extra properties added. You submit the GeoJSON boundaries with a POST request, the planner does a bunch of computational geometry and graph algorithms, and it returns GeoJSON back. If you want to, you can just load the flight plan up in QGIS or ArcGIS or whatever and inspect it directly.
- We also accumulate quite a bit of geospatial data that we need to post-process. We used SQLite with the Spatialite extension to store that. Same story as the flight plans... you can really easily load it into QGIS or Geopandas or whatever you want and do your analysis
- We need to stream video down to the ground station and ended up using RTSP, h.264, and GStreamer to do that. You can connect to the video feed using our ground station software if you want, but you can also just connect to it using VLC. And internally this meant that if we wanted to do hardware-accelerated encoding it was just a matter of changing the GStreamer pipeline. Or... if I get my way over the next month or so, we'll be adding a HUD with extra telemetry right into the video feed, again using GStreamer plugins.
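To make the GeoJSON point above concrete, here's a sketch of what a request/response pair in that style could look like (property names like "overlap" and "altitude_m" are invented for illustration, not our actual schema):

```python
import json

# A coverage-planning request as plain GeoJSON: one Feature per field
# boundary, with extra properties the planner understands.
request = {
    "type": "FeatureCollection",
    "features": [{
        "type": "Feature",
        "properties": {"overlap": 0.7, "altitude_m": 80},  # invented knobs
        "geometry": {
            "type": "Polygon",
            "coordinates": [[[-93.10, 44.95], [-93.09, 44.95],
                             [-93.09, 44.96], [-93.10, 44.96],
                             [-93.10, 44.95]]],
        },
    }],
}

# A (toy) response: waypoints as Point features, so any GIS tool that
# reads GeoJSON can render the plan directly.
response = {
    "type": "FeatureCollection",
    "features": [
        {"type": "Feature",
         "properties": {"seq": i},
         "geometry": {"type": "Point", "coordinates": c}}
        for i, c in enumerate([[-93.10, 44.95], [-93.09, 44.96]])
    ],
}
print(len(json.loads(json.dumps(response))["features"]))  # 2
```

The payoff is exactly what's described above: the wire format and the inspection format are the same file.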
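The Spatialite store is the same idea applied at rest. As a self-contained stand-in (plain SQLite with geometry held as GeoJSON text, so it runs with the stdlib alone; real Spatialite stores proper geometry and adds spatial indexes and queries), the shape of the data is roughly:

```python
import json
import sqlite3

# Simplified stand-in for a Spatialite table: geometry stored as
# GeoJSON text. Column names and values are invented for illustration.
db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE samples (
        ts     REAL NOT NULL,   -- unix timestamp
        value  REAL NOT NULL,   -- some sensor reading
        geom   TEXT NOT NULL    -- GeoJSON Point
    )
""")
point = {"type": "Point", "coordinates": [-93.1, 44.95]}
db.execute("INSERT INTO samples VALUES (?, ?, ?)",
           (1700000000.0, 42.5, json.dumps(point)))

row = db.execute("SELECT value, geom FROM samples").fetchone()
print(row[0], json.loads(row[1])["coordinates"])
```

Either way, the file is something QGIS or Geopandas can open without any custom export step.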
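And for anyone who wants to try the RTSP/GStreamer approach, the quickest path I know of is gst-rtsp-server's bundled `test-launch` example (the pipeline below uses a test source rather than our actual camera pipeline; port and path are that example's defaults):

```shell
# Serve an H.264 test stream over RTSP via gst-rtsp-server's
# test-launch example binary.
./test-launch "( videotestsrc ! x264enc tune=zerolatency ! rtph264pay name=pay0 pt=96 )"

# Watch it from any RTSP client, e.g. VLC:
vlc rtsp://127.0.0.1:8554/test
```

Swapping in hardware encoding or overlay elements is then just editing that pipeline string.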