I discovered this article a while ago when I was toying around with using Plan 9 as it was designed: as a fully transparent distributed operating system. After reading the article I was a little shocked, because the author didn't do any configuration or setup for the other machines, so I thought: wow! Plan 9 was so advanced that all you need is multiple machines present on the same network and it will automatically 'clusterize' them.
So I went out to make my own cluster and realized that no, sadly, the author simply had not activated or configured the other machines at all.
I still love Plan 9 and still use it on occasion, but I'd love more detail on how to actually set up a Plan 9 cluster and really 'get into' the power of distributed computing.
What does this have to do with Plan 9? This post is mostly about building a cute case and keyboard for 4 Raspberry Pis. No clustering and almost no Plan 9 involved at all.
> The project provided an excuse to make use of a Vortex Core 47-key keyboard, which together with a mini white HDMI monitor, provided a particularly compact and suitably futuristic feeling setup.
To me this keyboard doesn't seem futuristic at all. It reminds me of the first computer I ever used.[1]
I mean, it is a lovely keyboard, I really like it. "Futuristic" is just a strange way to describe it. It would be even nicer if some brightly colored keys were added, like on the MK-61[2]. The photo on Wikipedia doesn't capture it (or maybe my memory is wrong), but that blue key was such a beautiful shade; I really loved it in my childhood and was eager to learn how to use the MK-61 just for the joy of using that very key. The yellow and red keys were likeable too, though not as much; moreover, the function of the red key was trivial and I lost interest in it very quickly. But the blue key was beautiful and mysterious... my first love. Maybe I became a programmer because of the color of that key.
Presumably through the Vortex's[0] similarity to the Symbolics PN 365407[1][2] keyboard, with its very distinctive design that manages to capture at once both futurism and timelessness.
Disregard the 2020s' clunky Cherry-switch keyboards, acquire the OG Best Keyboard.
I believe you are spot on with that - the minimalism of the keyboard, the case, and the Plan 9 UI evokes a sort of timelessness, the same way a Connection Machine still looks futuristic today.
It would have been nice to know how the software side of the clustering was set up. How easy/painful was it to let the different Pis know they were part of a single cluster, how easy/painful was it to have different programs seamlessly use resources from different Pis, etc.
On plan9, all your services are file based. The difference between a cluster and a standalone setup is just whether you mount local instances of a service, or remote instances.
For a cluster, you'll usually have a cwfs/hjfs (persistent storage) server, a factotum (auth) server, and one or more CPU servers, and possibly a terminal machine.
You can then freely use or connect to CPU servers. As everything is shared, and the GUI is also file-based, everything will feel local.
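A minimal sketch of what that looks like from a terminal's rc shell. The hostnames fs.local and cpu.local are placeholders, and this assumes authentication against your auth server is already configured:

  # dial the file server, post it in /srv, mount it at /n/fs
  # (the 9fs script wraps these same steps)
  srv tcp!fs.local!564 fs /n/fs
  ls /n/fs                 # remote files now behave like local ones

  # run a shell on a CPU server; your namespace travels with you
  rcpu -h cpu.local        # 9front; on the labs distribution: cpu -h cpu.local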
Depends on the latency, which is the Achilles' heel of 9p and Plan 9 in general. With 9p, for every T message there's an R message that is waited on before the next T is sent. So if there's 100ms of latency you have to wait 100ms between EVERY 9p message. Pain and suffering. The labs people never focused on performance, but on actually building a clean, comprehensive system with the small team they had. It was a pragmatic approach for sure, and compromises were made.
Though when using local ethernet, heavy graphics isn't that bad. I can play Doom over local ethernet using drawterm to a CPU server which is actually running the Doom processes. Of course, blowing up the game screen makes it suffer, as Doom doesn't use devdraw.
Another thing to realize is that Plan 9 graphics are done via devdraw, which is a sort of 2D graphics engine: you load text and bitmaps into it, then issue commands to draw and move them around on screen. So a local devdraw on the terminal only receives draw commands for the loaded resources, meaning some caching and heavy lifting is done at the terminal end. It's not like VNC or RDP, where you are viewing a remote frame buffer.
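For concreteness, the drawterm side of that is a one-liner from a Unix box; the host names and user below are placeholders for whatever your site uses:

  # connect to a cpu server; graphics come back as draw commands, not pixels
  drawterm -a auth.local -h cpu.local -u glenda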
> With 9p, for every T message there's an R message that is waited on before the next T is sent.
This is entirely false.
The 9P protocol is asynchronous, can handle up to 16k outstanding requests, supports out-of-order responses, and includes support for arbitrary request cancellation.
The only things that block waiting for an R message are blocking syscalls, but that doesn't stop other processes from sending further T messages.
Yeah, using Plan 9 for the graphics rendering stuff Hollywood was using Irix and NeXTSTEP for was never on the team's roadmap, especially when on Inferno the UI was based on Tk.
Cinematic graphics rendering has nothing to do with UIs and the technologies used there.
Plan 9 poses no performance issues or technological barriers to doing e.g. ray tracing or scene previews for graphics work, even with respect to displaying it in a UI.
> Of course it does, where do you think IrisGL and Renderman were used for?
Not only are you confusing native UI stack for OS/HW capabilities, you are also seemingly confusing CPU/GPU workloads.
Nothing at all stops Renderman from running on Plan9, other than Renderman not having native Plan9 support.
Renderman does CPU ray tracing, as ray tracing still cannot be done on the GPU (and no, RTX has no value in production-quality rendering). The only GPU feature of Renderman is a post-denoising filter, which will otherwise happily fall back to the CPU.
> An OS architecture that doesn't do DMA for maximum hardware performance just isn't up to the game.
DMA has no effect on Renderman.
At this point, based on the history of this thread, I'll have to conclude that you have no idea what you're talking about at all.
I agree, I'd have liked to read about how he chose to set it up. There are 4 Pis in there, but for all I know, no more than a single one is actually doing anything.
But beware: these are cheap Chinese monitors available on Alibaba from the company jwell (cheaper, but with a bad display), so be prepared for a bad experience with the remote.
On the upside, these monitors have composite input (for use with older computers) and can be powered by 12 V (rated for 9-33 V, not tested though)!
While this looks cool, it is far from useful. I took the time to build such a cluster with Pis, and shortly after it landed in the box of things I may need in the future. It's just too slow to do anything useful: watching stuff compile or even boot was like watching paint dry. It is much better to get a bunch of used NUCs if you are space-conscious, or just get VMware Workstation or a similar tool to simulate a number of machines on your desktop.
While no speed demon, the RPi 4 B with 8 GB of RAM has good enough performance to use for aarch64 development. I can compile qemu in a reasonable time and run small VMs with it. You have to do a bit of work to upgrade the firmware and boot from UAS SSDs (https://rwmj.wordpress.com/2020/09/24/raspberry-pi-4-running...)
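For scale, the kind of build I mean is a stock qemu compile restricted to the aarch64 target; a rough sketch, assuming the usual build dependencies are installed:

  git clone https://gitlab.com/qemu-project/qemu.git
  cd qemu
  ./configure --target-list=aarch64-softmmu
  make -j4        # one job per core on a Pi 4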
I think the utility is more in experimenting with distributed computing technologies in a relatively affordable way, with hardware that is familiar and simple to set up.
I made a small SAN, just for learning: four Pi 4s serving two USB-SATA disks each via iSCSI to my NAS, which mounts them as a ZFS pool. The Pi 4s are connected to a switch with a 10G link to the NAS.
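On the NAS side it boils down to something like the sketch below (the addresses and device paths are made up, and each Pi has to be exporting its disks as iSCSI targets first):

  # log in to each Pi's iSCSI target with the open-iscsi initiator
  iscsiadm -m discovery -t sendtargets -p 192.168.10.11
  iscsiadm -m node -p 192.168.10.11 --login
  # ...repeat for the other three Pis, then pool the resulting disks
  zpool create tank raidz2 /dev/disk/by-path/ip-192.168.10.*-iscsi-*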
Pis are not really power-efficient - their CPU is made on a super-obsolete 40nm node. They have no sleep mode, or any low-power mode for that matter. Imagine how long a phone would last on that battery - it's the same class of CPU.
An Intel Compute Stick or a Beelink mini PC is also passively cooled and weighs no more.
Mobile phones don't have a keyboard; you can only consume with them. I produce things.
Those sticks don't have a company with its own GPU and Linux distribution (Raspbian, now Raspberry Pi OS) behind them, and they cost more per watt for performance I don't need.
Nope - if you compare theoretical Gflops/watt it's probably 30%, but in practice you'll only get maybe 15%. Memory speed is the bottleneck and you cannot get around that. DDR3 had lower latency than DDR6; only bandwidth is increasing, and you have to ask: bandwidth for what?
Sorry, I don't follow. Why would anything be getting faster if we are talking about hypothetically utilising (just) the power-saving gains from newer nodes?
Come on, it’s just plain beautiful. Sure, it might not be very useful as a modern desktop, but just looking at the result, it looks like a piece of art to me.
The person probably enjoyed doing it, most likely didn’t harm anyone in the process, and possibly learnt some things along the way.
Not everything needs to be useful beyond the joy of doing it.
You can get a bunch of refurbished micro PCs coming out of corporate fleets on eBay for cheap. Pis are nice, but with cases and power supplies they aren't so cheap, and storage is a hassle.
Is there any setup required in software for the actual cluster? It looks like there are 4 Pis all running a Pi Plan 9 image, and they're wired together via ethernet. But where is the software setup to link them all together? Or does this just happen automatically on Plan 9?
We tried hosting Micro (micro.mu) on a Pi last year as a test, to create some sort of global distributed network for services. Micro is written in Go, and just as Plan 9 treats everything as a file, Micro treats everything as a service. This post is really fascinating. In some ways its popularity makes me think self-hosted Pis for cloud services might be viable, even to the point of designing a packaged solution for devs.
This is great. I’ve had a single machine running for a long time, and a lot of the knowledge is still around, but buried in ancient pages. Having someone write about it and do a little hardware project is sure to spark interest.
Of course those SD card image links are broken. I recommend using the 9front fork, which is actively developed, with a nearly-daily-active mailing list and IRC channel. http://9front.org/releases/ (please use the torrent links if possible)
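Writing the Pi image is the usual routine; the filename below is a placeholder for whatever the current release is, and /dev/sdX is your SD card:

  # decompress and write the 9front Pi image to the card
  gunzip < 9front-XXXX.pi3.img.gz | dd of=/dev/sdX bs=4M conv=fsync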
That's really interesting - do you know why there is a minimum?
Maximum lengths are easy to understand because of voltage drops, interference and impedance over long distances. And of course in the old CSMA/CD days multiple senders couldn't be separated by too large a distance.