We used Oberon (the system and the language) in our 3rd year CS OS course in Stellenbosch in the mid 90's
Interesting system, it's worth looking at since it has a few unexplored ideas.
(I was less impressed by the language.)
I experienced it at about the same time as win3 / Win95 and X-Windows (before KDE/Gnome) and it compared favourably.
I'm glad this post is calling attention to it, but I don't think the post does it much justice.
The zooming UI and desktops happened later, as a reaction to the (today) more mainstream approaches.
I wouldn't call the interface a CLI - it wasn't a REPL.
It needed a three-button mouse and was very point-and-click.
TUI is more accurate.
It was graphical, with no desktop and non-overlapping panels (although in certain cases one panel could obscure a lower panel).
Think tiling window manager.
The mouse button use was chorded, so there were a lot of combinations; this meant you could highlight, select, copy, paste and execute commands without using the keyboard.
Any text could be selected and executed. Selecting 'Module.Function' would execute that function in that module.
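A command in Oberon is just an exported, parameterless procedure in a module. From memory (the names are mine and details may be slightly off), a minimal command module looked roughly like this:

    MODULE Hello;  (* minimal command module, roughly in the classic Oberon style *)
      IMPORT Texts, Oberon;
      VAR W: Texts.Writer;

      PROCEDURE World*;  (* exported and parameterless, so invokable as a command *)
      BEGIN
        Texts.WriteString(W, "Hello from a command");
        Texts.WriteLn(W);
        Texts.Append(Oberon.Log, W.buf)  (* output lands in the system log viewer *)
      END World;

    BEGIN
      Texts.OpenWriter(W)
    END Hello.

After compiling it, middle-clicking the text Hello.World anywhere on screen runs it - that's essentially all there is to the "any text is a command" mechanism.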
It used a "Document Object Model": the base text class was extensible to handle more widgets than just native text.
Single user, single process with no real distinction between the system code and your own.
You could view and edit system modules and add your own.
Some of the research work at Stellenbosch was also related to Oberon, e.g. the Gneiss microkernel by Johan de Villiers [1,2], to which Jacques Elf and Frank van Riet ported Native Oberon [3]. Quite interesting and not well-known work IMHO.
[3] Jacques Elf and Frank van Riet, Porting Native Oberon to the Gneiss Microkernel - A Guideline for Future Ports, 2002.
http://norayr.am/papers/port.ps
For others wondering: Oberon the language was created in 1987 and has (from Wikipedia):
Case sensitive syntax with uppercase keywords
Type-extension with type test (see the sketch after this list)
Modules and separate compilation
String operations
Isolation of unsafe code
Support for system programming
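To make the type-extension bullet concrete, here is a tiny sketch of my own (not from the article, so take the details with a grain of salt):

    MODULE Shapes;  (* illustrative: type extension plus a run-time type test *)
      TYPE
        Figure*     = POINTER TO FigureDesc;
        FigureDesc* = RECORD x*, y*: INTEGER END;
        Circle*     = POINTER TO CircleDesc;
        CircleDesc* = RECORD (FigureDesc) r*: INTEGER END;  (* CircleDesc extends FigureDesc *)

      PROCEDURE Area*(f: Figure): INTEGER;
        VAR a: INTEGER;
      BEGIN
        a := 0;
        IF f IS Circle THEN                   (* type test *)
          a := 3 * f(Circle).r * f(Circle).r  (* f(Circle) is a type guard; pi crudely rounded to 3 *)
        END;
        RETURN a
      END Area;
    END Shapes.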
I remember Pascal being a joy to use compared to C++ (how many months did it take to get comfortable reading C++ template errors?). I regard Wirth as being in the top three computer scientists of all time. Also one of the more humble ones.
For the life of me, this Oberon/Plan 9 obsession with "all text can be turned into commands" escapes me. It seems so counterintuitive and just aesthetically displeasing. Like, there are huge swaths of software where this sort of paradigm just makes no sense ("click to evaluate" your Excel file, and then paste it there?).
Like OK, cool, I get an email and I can middle-click it there to run it instead of.... I guess opening a text file and putting it there and hitting enter?
Not to mention that limiting yourself to text really limits how far this metaphor can go.
I do realize that there are limitations of the time, and that for the time it would be way better than, say, Win 3.1. It just feels like an evolutionary dead end (compared to, say, Smalltalk-y OSes whose concepts have lived on in things like browser dev tools and SFDC's Apex environment).
I don't mean to yuck too much on anyone's yum, just feels like there are better ideas out there at this point.
The basic counter-example I was thinking of is Mathematica. In Mathematica you can paste an image as input, and have the input embedded into the document and used as an operator.
Similarly you could imagine files/documents/etc be represented as elements of rich text.
This is _not_ an easy problem however! As an example, if I had a document referring to `~/.emacs`, it could be:
- the path on the system of the author's machine
- the path on the system of the reader's machine
- the content of the file on the author's machine
- the content of the file on the reader's machine
- a sort of Lisp data structure based on the content of the file (on the author's or the reader's machine)
Basically you run into the "reference or value" problem for many things. Text doesn't have this problem because, for the most part, it has no solution to the value problem at all, apart from embedding something like a base64 blob into your text.
But hey, if you're building an entire OS anyways, and you have some rich text primitives, you could imagine having that in the base libraries and setting up some COM-like primitives all over and work from there.
I mean this is basically Word, at this point. But a programmer-y version of Word would be interesting from my perspective.
It's less to do with text, really. When you click, you're passing a message to the module. PowerShell has a very rich object model as well, and the general idea is more like piping between objects (vs Unix, where everything piped IS text).
Another good example is TempleOS, where you can render graphics into the CLI and interact with them.
Yes, and the logical extremes, to my mind, were Symbolics Genera (a Lisp-machine OS) and Apple's Sk8 (a "HyperCard on steroids" rapid-application-development environment built on Macintosh Common Lisp).
In Genera, any output to a REPL window was not dead text; it was a live, mouse-sensitive reference to the actual object in memory. You could mouse on it to get a dynamically generated menu of operations supported by the pointed-to value, or call functions on it, or whatever.
In Sk8 you could grab an arbitrary widget anywhere on the screen and drop it on a MessageBox window to obtain a named variable referring to it. You could then operate on it in a manner similar to that described above.
I have their current software running locally. It's essentially a museum piece in its present state, but what I understand from Larry Masinter is that they've got the license unencumbered and intend to turn it back into a going project. I can't wait!
This is a really controversial pattern for GUIs. In one camp, the GUI is really a skin over the CLI that acts like a virtual user, translating GUI inputs into the underlying CLI. In the other camp, the GUI is all there is (e.g., Windows) and there is no underlying OS that can be accessed via a CLI; in fact, the CLI is a "fake GUI" (Win32 apps written without a window). I can't say which is better, but it is fascinating to see that this was an "original pattern".
Windows CLI apps aren't "fake GUI". There's a flag in the executable header that tells the loader (and from there the rest of the OS) whether to ensure a console is allocated for it and wire up stdin/out/err if they're not already wired up, but that's it.
An executable won't have its own window unless it calls CreateWindowEx one way or another, and any such window won't be very functional until the app starts pumping messages, which is work it needs to actively do, with GetMessage and DispatchMessage. Obviously a CLI-only app won't do these things, but it doesn't need to go out of its way to not do them; it doesn't need to fake anything, or hide or otherwise resort to subterfuge to conceal GUI elements.
There's a stronger argument to be made in some COM scenarios; e.g. single threaded apartment threading model creates a hidden window so it can use the message pump as a communication and serialization mechanism. But even here it's mostly just repurposing existing Windows stuff in ways which work well with existing GUI apps.
I'm (slowly!) working towards an OS where every user-invoked application is a GUI, most applications are many processes (more like Erlang than Unix), and every interprocess interaction, including GUI programs asking the OS to be drawn, is message-passing of data.
My theory is that if applications are written as servers that do message-passing, one can have a shell language that orchestrates the passing of messages between servers instead of the flow of bytes between CLI programs; the semantics of the shell language still need to be worked out, though. E.g., does it need session types, or can one get reasonable behavior by structuring data specially to indicate "this is only part of a response that's being streamed"?
On the GUI side, the idea of describing a UI in pure data (like HTML) seems very reasonable, and seems like it would make it much easier to quickly throw together small GUI programs. So the drawing part of an application would just be a process that sends the screen/compositor process a message describing the state of its window as a tree, and receives messages for events in response.
A big advantage is that it makes the semantics of composing GUIs a lot more reasonable: "replace this leaf of my tree with this other process' tree" is a simple-to-implement and simple-to-understand operation, and it seems like it'd make sharing widgets way easier: widgets are just processes that render without the "this is a window" flag set, and you ask the compositor to put them into your window's tree. Events flow back to the widget, and each side can send the other messages easily. An application could also "proxy" for a widget, including over a network link, so you get fairly simple network transparency this way too.
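To make that concrete, here is roughly the kind of tree I have in mind, sketched as Oberon-style records purely for illustration - nothing here exists yet and all the names are invented:

    MODULE UITree;  (* hypothetical: a window described as a tree of plain data nodes *)
      TYPE
        Node*     = POINTER TO NodeDesc;
        NodeDesc* = RECORD
          kind*: ARRAY 16 OF CHAR;   (* e.g. "column", "label", "button" *)
          text*: ARRAY 64 OF CHAR;   (* payload for text-bearing nodes *)
          child*, next*: Node        (* first child and next sibling *)
        END;

      (* build a trivial window: a column holding a label and a button *)
      PROCEDURE Example*(): Node;
        VAR root, lbl, btn: Node;
      BEGIN
        NEW(root); root.kind := "column";
        NEW(lbl);  lbl.kind := "label";  lbl.text := "Hello";
        NEW(btn);  btn.kind := "button"; btn.text := "Quit";
        root.child := lbl; lbl.next := btn;
        RETURN root  (* the application would serialize this and send it to the compositor *)
      END Example;
    END UITree.

A leaf's kind could just as well be "embed", naming another process's tree, which is the widget-sharing case above.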
At some point, this would come close to the AppleScript protocol or Symbian OS.
> So the drawing part of an application would just be a process that sends the screen/compositor process a message describing the state of its window as a tree, and receives messages for events in response.
I've been toying with an interpretation of this here - https://github.com/Imaginea/inai - and kind of having fun with it .. and even built a prototype internal app using it. Super early stage and so stuff won't necessarily make sense at the outset .. or possibly ever. Thoughts welcome though.
> A big advantage is it makes the semantics of composing GUIs a lot more reasonable "replace this leaf of my tree with this other process' tree" ...
The "dom" service in Inai pretty much feels like that. I felt like an idiot to try and (for lack of a better expression) REST-ify the DOM, but it seemed to work to my surprise.
> An application could also "proxy" for a widget, including over a network link, so you get fairly simple network transparency this way too.
.. yeah due to the "REST" nature, this becomes pretty straightforward.
This is the basis for Composita, i.e. every component has its own message stack, which gives the component a formal interface and allows a) reference counting and b) static analysis for deterministic memory allocation (the maximum stack depth required).
Performance is great, memory is managed with no GC, multithreaded GUIs are possible, etc.
Composita is a further development of A2. I think it was a real missed chance that A2 wasn't chosen instead of Android, as the ZUI and compiled modules would have been a great fit for mobile.
Oberon is a really nice and very well documented programming language and operating system to experiment with. Unfortunately, not that many distributions are still available (most links seem to be dead).
It is nice until you start to write device drivers. I participated in porting it to an educational platform in Uni half a year ago, and the bit fiddling with MOD and DIV was really a pain.
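To give a flavour of what I mean (everything below is invented for illustration), reading a small field out of a memory-mapped status register without shift/mask operators turns into DIV/MOD arithmetic like this:

    MODULE Dev;  (* invented example: pulling a bit field out of a status register *)
      IMPORT SYSTEM;
      CONST StatusAdr = 0F0040H;  (* made-up memory-mapped address *)

      PROCEDURE Mode*(): INTEGER;
        VAR status: INTEGER;
      BEGIN
        SYSTEM.GET(StatusAdr, status);  (* read the word at StatusAdr *)
        RETURN status DIV 10H MOD 8     (* bits 4..6: divide by 2^4, keep the low 3 bits *)
      END Mode;
    END Dev.

It works, but doing that by hand for every register field gets old fast, especially once a register with the top bit set comes back as a negative INTEGER.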
This is no issue with my version of the OS since it is just an application and delegates the low-level details to my C++ code and Qt. And btw. there are some bit manipulation functions in the SYSTEM module, and it's easy to add even more.
This is cool. Reminds me of an attempt, I think in the Windows 95 era, at a desktop that mimicked the computer in the original Jurassic Park movie. A 3D view of Windows, which you could turn and zoom to any layer; layers were folders with files, which you could zoom into to activate. At the same time, by rotating and zooming out, you could see the relationship of each folder to another, and the trees formed by these folders.
Unfortunately, I think our computers were too slow for it to work very well.
I look forward to trying this out.
(But can we adapt it to Windows/Linux/Mac to make it usable?)
The original Jurassic Park movie showed a program called fsn, which ran on IRIX systems (which were also used to make digital effects for the movie itself). It has since been cloned.
Interesting -- I wonder what kinds of tricks evolved to better arrange windows? Zooming around without knowing "the lay of the window landscape" sounds like it could get tricky as the amount of window state grows very large.
I could imagine this kind of system might necessitate the invention of actually good auto-rearrange-all-windows strategies?
I don't see windows themselves as a bad thing -- it's having to arrange them that I don't like, and I wish the OS could define conventions for window placement to allow the system to "do what I want" more often without me having to fight it by moving windows around...
In addition, I think it depends on your input methods: a single mouse is going to be a little awkward, but a touchscreen, dual mice, or e.g. Oculus controllers (i.e. VR) open up several possibilities. Really, Oculus should be great for CAD as it's a fast interface, but we're not there yet.
That's also one thing I was thinking of when I read about the zooming thing.
VR might be a new chance for Oberon to win out as a new paradigm: a 3D-based file/document manager, with 3D space instead of infinite zooming.
Not sure how that would work but it's clear that the current way of actually working in VR is not efficient. Usually some curved virtual screens are presented and you're left to feel around for your actual keyboard & mouse for input. Talking about stuff like Virtual Desktop and https://immersedvr.com . For example, why do you still have virtual 'monitors' in VR, why not have the windows free floating? It's only a retro constraint that makes no sense anymore.
It's like a vegetarian eating veggie burgers at McDonalds. Yeah it works, but it's mainly for 'conversion'/familiarity purposes and doesn't get the most out of the new paradigm.
The zoom idea sounds cool, but how do you make this practical? Maybe a gesture trackpad?
Also I'm sure that for me this will end up a huuuge mess where I can no longer find anything. I'm the kind of person who fills up their desktop with icons until there's no space left. Doing this with actual open documents... The prospect is scary :)
Although the zoom idea is cool, it's nothing new. There have been many UI attempts at this, like ZenZui in 2007. Back in the day it was a rat race to find the best UI for these emerging devices called smartphones: https://youtu.be/r12eUXJNbl8
The initial Oberon OS is indeed from 1987, but there are many Oberons. The original Oberon GUI was the everything-is-executable-text one, not the ZUI.
The ZUI was part of Thomas M. Frey's work for his dissertation "Bluebottle : A Thread-safe Multimedia and GUI Framework for Active Oberon" [2] (Bluebottle OS) submitted in 2005.
This was built on top of Pieter J. Muller's work for his dissertation "The Active Object System" [1] (AOS) submitted in 2002.
There are many more ETH (PhD and student) projects based on Oberon before and after (ARM Oberon, WinOberon, UnixOberon, Oberon.NET, ...) mainly in Jürg Gutknecht's group.
Thanks, this is really helpful!
Since there seems to be no central point of information on the myriad versions of Oberon, and no official website at ETH anymore, it's really hard to get the lay of the land and find out which version does what and why.
...and A2 was perfectly positioned to be used as a smartphone OS instead of Android (2007): compiled modules with GC, a multithreaded ZUI, etc., already in 2005. It's a shame really, but I guess developer familiarity trumps architecture :(
I made a similar UI for a tool. Here is what it looked like: https://vimeo.com/manage/386239513/general. Initially I had plans to make an infinite drawable layer on top of which tools resided.
This article helped me realise that there are other computer interface ideas which are better and less popular.
An idea that has been stuck in my mind for a long time, and keeps evolving, is a Blender-like 3D environment with a zoomable and rotatable 3D interface, where hardware interrupts are signals and resources are spaces. Zoomed out from a distance it looks like a light source; on zooming closer it starts to appear as computer hardware parts showing data and instruction flow. (Don't worry about frame rate yet.)
The picture that immediately popped into my head when reading about the zooming features was Factorio.
It would be interesting to spatially arrange files and folders in codebases. If the framerate is high, I imagine it could be pretty powerful. Remembering where a feature is on a plane seems easier than finding it in a directory tree.
The first thing that came to mind was Figma (or Sketch).
You get an infinite 2D plane on which you can put any number of windows (artboards in Sketch), and then on the left you can click them to find them when you're zoomed out or in.
This is actually a much better experience than the modern desktop, where you have to alt-tab or use Exposé or something.
Is it really better? If your windows are visible when scaled down then it's just a mess.
Figma works okay because things are usually arranged in user flows, so if nothing else you can see the flow structure.
For reports, school assignments, code, receipts... things without any higher level structure, it's searching for a needle in a haystack. You'd be better off with a list sorted by date.
It is inspired by it, as the Wikipedia[0] article notes.
Both are inspired by Cedar/Mesa[1] from PARC.
There is a great video of Eric Bier showing some of Cedar's capabilities on YouTube[2].
The infinitely large desktop is something I associate with older Unix file managers. Can't say I've ever liked it. Much better to have more than one desk or a single well organized desk than an infinitely large desk you have to walk across to find what you need.
There was some interesting stuff in that, like the 3D graphics in a terminal. But the guy also had some problems, haha - not the mental illness so much as the racial profanity stuff.
I don't know enough about the OS layer to say, but it seems like he knew his stuff.
Yeah, the feature I liked the most out of what I've seen of TempleOS was hyperlinks being a core part of the system. Somewhat similar to the text-centric way Oberon does things, but perhaps more practical and definitely easier to grasp.