
It would have been nice to know how the software side of the clustering was set up. How easy/painful was it to let the different Pis know they were part of a single cluster, how easy/painful was it to have different programs seamlessly use resources from different Pis, etc.


On Plan 9, all your services are file based. The difference between a cluster and a standalone setup is just whether you mount local instances of a service or remote ones.

For a cluster, you'll usually have a cwfs/hjfs (persistent storage) server, a factotum (auth) server, one or more CPU servers, and possibly a terminal machine.

You can then freely use or connect to CPU servers. As everything is shared, and the GUI is also file-based, everything will feel local.

https://9p.io/wiki/plan9/Expanding_your_Grid/index.html
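A toy sketch of the idea (Python, not real Plan 9 code, with made-up class and server names): the per-process namespace is essentially a mount table mapping path prefixes to backends, so "joining a cluster" changes a table entry, not the programs that read files.

```python
# Toy model of a Plan 9-style namespace: a mount table maps path
# prefixes to backends.  Whether a backend is local or a remote 9P
# server only changes the table entry; client code is identical.

class LocalFS:
    def read(self, path):
        return f"local data for {path}"

class Remote9P:
    """Stand-in for a 9P connection, e.g. to tcp!fileserver!564."""
    def __init__(self, addr):
        self.addr = addr
    def read(self, path):
        return f"data for {path} served over 9P from {self.addr}"

class Namespace:
    def __init__(self):
        self.mounts = {}                       # prefix -> backend
    def mount(self, prefix, backend):
        self.mounts[prefix] = backend
    def read(self, path):
        # Longest matching prefix wins, as with nested mount points.
        prefix = max((p for p in self.mounts if path.startswith(p)), key=len)
        return self.mounts[prefix].read(path)

ns = Namespace()
ns.mount("/", LocalFS())
# Turning this machine into a cluster member is one extra mount:
ns.mount("/n/fs", Remote9P("tcp!fileserver!564"))

print(ns.read("/tmp/scratch"))            # handled locally
print(ns.read("/n/fs/usr/glenda/doc"))    # handled by the remote server
```

The point of the sketch: nothing in `read` knows or cares which backend serves a path, which is why remote resources "feel local".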


> everything will feel local.

Not where graphics rendering performance is concerned.


Depends on the latency, which is the Achilles' heel of 9P and Plan 9 in general. With 9p, for every T message there's an R message that is waited on before the next T is sent. So if there's 100ms of latency, you have to wait 100ms between EVERY 9p message. Pain and suffering. The Labs people focused not on performance but on building a clean, comprehensive system with the small team they had. It was a pragmatic approach for sure, and compromises were made.
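A back-of-envelope sketch of why per-message round trips hurt, with purely illustrative numbers (the message count per file is a guess, not a measurement), assuming requests are serialized, e.g. one process issuing blocking syscalls one at a time:

```python
# Illustrative latency arithmetic: cost of touching many files over a
# high-latency link when each T-message waits for its R-message before
# the next is sent.

rtt_s = 0.100          # 100 ms round trip
msgs_per_file = 4      # e.g. walk, open, read, clunk per file (illustrative)
files = 500

serialized = files * msgs_per_file * rtt_s
print(f"serialized: {serialized:.0f} s")          # 200 s of pure waiting

# With many outstanding requests (pipelining across files), latency is
# paid roughly once per chain of *dependent* messages, not per message:
dependent_depth = msgs_per_file
pipelined = dependent_depth * rtt_s
print(f"pipelined across files: {pipelined:.1f} s")
```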

Though when using local Ethernet, heavy graphics isn't that bad. I can play Doom over local Ethernet using drawterm to a CPU server which is actually running the Doom processes. Of course, blowing up the game screen makes it suffer, as Doom doesn't use devdraw.

Another thing to realize is that Plan 9 graphics are done via devdraw, which is a sort of 2D graphics engine: you load text and bitmaps into it, then issue commands to draw and move them around on screen. A local devdraw on the terminal only receives draw commands for the loaded resources, so some caching and heavy lifting is done at the terminal end. It's not like VNC or RDP, where you are viewing a remote framebuffer.
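A rough sketch of why that matters for bandwidth, with illustrative numbers (command and sprite sizes are made up): a retained-mode client uploads a bitmap once and then sends tiny draw commands, while a framebuffer viewer re-sends the changed pixels every frame.

```python
# Illustrative comparison: shipping draw commands for an already-loaded
# bitmap vs. re-shipping the pixels themselves every frame.

width, height, bpp = 64, 64, 4        # one sprite, 32-bit pixels
sprite_bytes = width * height * bpp   # uploaded once: 16384 bytes
draw_cmd_bytes = 24                   # rough size of one "draw at (x,y)" command
frames = 60                           # one second of motion

retained = sprite_bytes + frames * draw_cmd_bytes
framebuffer = frames * sprite_bytes   # resend the sprite's pixels each frame

print(retained)      # bytes over the wire, devdraw-style
print(framebuffer)   # bytes over the wire, remote-framebuffer-style
```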


> With 9p, for every T message there's an R message that is waited on before the next T is sent.

This is entirely false.

The 9P protocol is asynchronous: it can handle tens of thousands of outstanding requests (tags are 16-bit), supports out-of-order responses, and includes support for arbitrary request cancellation via Tflush.

The only things that block waiting for an R message are blocking syscalls, but that doesn't stop other processes from sending further T messages.
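A minimal sketch of the tagging mechanism being described (Python, with made-up message strings): several T-messages can be in flight at once, and each R-message is matched back to its request by tag, even when responses arrive out of order.

```python
# 9P-style tag matching: send without waiting, pair responses by tag.

pending = {}                      # tag -> outstanding request

def send_t(tag, req):
    pending[tag] = req            # fire and keep going; no waiting here

def recv_r(tag, resp):
    req = pending.pop(tag)        # match the response to its request
    return f"{req} -> {resp}"

# Three requests in flight, e.g. from different processes:
send_t(1, "Tread fid=3")
send_t(2, "Twalk fid=5")
send_t(3, "Tread fid=7")

# Responses arrive out of order; each still pairs up correctly:
print(recv_r(2, "Rwalk"))
print(recv_r(3, "Rread 4096 bytes"))
print(recv_r(1, "Rread 512 bytes"))
assert not pending                # nothing left outstanding
```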


Yes, this is true; I was oversimplifying.

But this is the reason why drawterm or rcpu is not very usable over high latency connections.


Have you used a Plan9 UI recently?

The simple draw commands issued by Plan9 applications over /dev/draw by e.g. acme or rio shouldn't be cause for concern.

There is no accelerated graphics here, nor UIs that need it.


Exactly, because the infrastructure was never there, as Carmack pointed out back in the day.


Acme and Rio are simple because they should be, not because they couldn't be anything else.

Although, you are correct that UIs have degraded badly even as available resources have increased, which is very frustrating.


Yeah, using Plan 9 for the graphics rendering work Hollywood was using IRIX and NeXTSTEP for was never on the team's roadmap, especially when the Inferno UI was based on Tk.


Cinematic graphics rendering has nothing to do with UIs and the technologies used there.

Plan 9 poses no performance issues or technological barriers to doing, e.g., ray tracing or scene previews for graphics work, even with respect to displaying the results in a UI.


Of course it does, what do you think IrisGL and RenderMan were used for?

An OS architecture that doesn't do DMA for maximum hardware performance just isn't up to the game.


> Of course it does, what do you think IrisGL and RenderMan were used for?

Not only are you confusing the native UI stack with OS/HW capabilities, you also seem to be confusing CPU and GPU workloads.

Nothing at all stops RenderMan from running on Plan 9, other than RenderMan not having native Plan 9 support.

RenderMan does CPU ray tracing, as ray tracing still cannot be done on the GPU (and no, RTX has no value in production-quality rendering). The only GPU feature of RenderMan is a post-denoising filter, which will otherwise happily fall back to the CPU.

> An OS architecture that doesn't do DMA for maximum hardware performance just isn't up to the game.

DMA has no effect on RenderMan.

At this point, based on the history of this thread, I'll have to conclude that you have no idea what you're talking about at all.


I agree, I'd have liked to read about how he chose to set it up. There are four Pis in there, but for all I know, no more than a single one is actually doing anything.



