Modernising RISC OS in 2020: is there hope for the ancient ARM OS? (liam-on-linux.livejournal.com)
111 points by lproven on Oct 10, 2020 | 53 comments



I did the fun part of a RISC OS emulator 20 years ago - http://riscose.sourceforge.net/ - i.e. WINE but for RISC OS applications.

It got as far as being able to run the RISC OS C compiler, compile and run its own output for a few command line applications.

But then we got to the stage of needing a test suite. Then my friend & collaborator on the project (Chris Rutter) died at the age of 19, then I finished college, had to find work and lost most of my RISC OS connections & nostalgia.

There was definitely a rich seam of RISC OS software in 2000 that Linux desktop could have used - great text editors and word processors and Sibelius. But Linux got most of that pretty fast.

Also I don't think RISC OS would have escaped the trap of most desktop OSes - that a desktop OS with native applications became a niche, expensive taste. By 2010 most desktop computer users were looking for a host for a web browser, and RISC OS itself is still spectacularly unsuited to that - no defence against hostile code, no preemptive multithreading, no support for proprietary blobs (at least in 2000), etc. etc.

That doesn't stop me missing it though!


RISC OS is written mostly in ARM assembly, but it's a pretty complete (if dated) OS. I think it would be interesting to convert the sources to LLVM-IR and then abstract the hardware specific parts giving you a cross platform version of RISC-OS. Yeah, it'd be a lot of work ... but I think it'd be very cool.

Why?

25 years ago there were dozens of OSes: AIX, Ultrix/OSF-1/Tru64, HP/UX, DG/UX, SCO (yuck), MS-DOS, Dynix/PTX, Solaris/SunOS, Windows, Mac, VMS, RISC-OS, Amiga, BeOS etc. etc. Now we're down to a handful of OSes, with a large amount of computing converging on Intel/Linux -- FWIW, I have nothing against Linux; I've used it since the early Slackware / pre-1.0 kernel days.

But I do believe competition is a good thing and the current trend for OS (and processor) convergence is worrying to me. I'd like to see more innovation and diversity in this space.


Listing some I know of: Linux, Windows, macOS, FreeBSD, Redox, Fuchsia, iOS, Android (sharing the kernel but sufficiently different from Linux), ChromeOS, plus many more experimental but available OSes.


There's also FreeDOS, Haiku, and ReactOS.


• linux <- an xNix (DEC PDP-7/PDP-11 OS)

• windows <- modernised VMS (DEC VAX OS, the 32-bit PDP-11)

• macos <- xNix

• freebsd <- xNix

• redox <- experimental, unfinished, no apps, xNix-like

• fuchsia <- xNix in Go

• ios <- basically macOS, an xNix

• android <- a weird Linux

• chromeos <- Linux

That's 2 so far. 2.5 if I'm being generous.

Do keep going, would you?


That's 2 families, not 2 operating systems. If you can tell me with a straight face that Debian, FreeBSD, and iOS are the same thing, then we're really not having the same conversation.


Unix is Unix is Unix. Compared to the diversity that is out there even now, and far more so, to that which _was_ out there 25y ago, all Unixes are the same OS, yes.

They are all one because the differences between them are trivial compared to their similarities. It doesn't matter if the kernel is monolithic or modular, or if the filesystem is case-preserving but not case-sensitive (NT, macOS). These are hidden technical details.

But most people now have only SEEN Unix and nothing else, so they think that these trivial implementation details -- like what is the default shell, or where are libraries kept -- are important differences. They aren't. They're unimportant decorative details.

When I talk about diversity, let's talk about some real non-xNix OSes I have owned, used, and worked with.

Assumption: shells

Imagine an OS with no shell. No command line at all. Shells are not a given. The idea of typing commands in text at a keyboard, hitting Return to send it for evaluation, getting an answer back, and acting according to that: that is an assumption, and it is one that came from 1960s mainframes.

The slightly more subtle idea that your input is sent character by character, and changing those can interrupt this -- e.g. Ctrl+C to cancel -- that's an assumption, too. It's from 1970s minicomputers, and indeed, the modern version is from one specific company's range: the Digital Equipment Corporation PDP series. The "enhanced" keyboard layout? Not IBM: DEC. The 6-dot-3 then 8-dot-3 letter filename thing? DEC. Filename plus extension at all? DEC.

Alternatives: classic MacOS. Atari TOS/GEM. ETH Oberon. Apple NewtonOS. Psion EPOC16 and EPOC32.

Assumption: configuration held in text files

Config files are an assumption. I have used multiple OSes with no config files at all, anywhere. Not hidden: nonexistent. The idea of keeping config in the filesystem is an artifact of one design school.

Other alternatives to it have included:

- a single global database, part of the OS, not visible in the filesystem at all.

- multiple databases, implemented as different parts of the OS design. One inside the kernel, one inside the filesystem structures of the OS.

- per-app databases, managed by OS APIs. So you don't write files or choose a format: you call the OS and give it things to store, or ask it what's there.

The upshot of these latter two kinds of design is that you get facilities like connecting a storage medium to the computer, and all its programs and data are instantly accessible to the user -- in menus, as associations, whatever. And when you eject a medium, it all neatly disappears again, reverting to previous values where appropriate.

Best example: classic MacOS.
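As a toy illustration of the "per-app database, managed by OS APIs" idea: the application hands values to the OS and asks for them back, and never sees a path or a file format. The function names (`pref_set`/`pref_get`) and the in-memory store are invented for this sketch; the real systems exposed equivalents through their own APIs.

```c
/* Sketch of a no-config-files design: apps call an OS preference
   service instead of writing files. All names here are hypothetical. */
#include <stdio.h>
#include <string.h>

#define MAX_PREFS 16

static struct { char key[32]; char val[64]; } store[MAX_PREFS];
static int n_prefs = 0;

/* The app gives the OS a key and a value; storage format is invisible. */
static void pref_set(const char *key, const char *val) {
    for (int i = 0; i < n_prefs; i++)
        if (strcmp(store[i].key, key) == 0) {
            snprintf(store[i].val, sizeof store[i].val, "%s", val);
            return;
        }
    if (n_prefs < MAX_PREFS) {
        snprintf(store[n_prefs].key, sizeof store[n_prefs].key, "%s", key);
        snprintf(store[n_prefs].val, sizeof store[n_prefs].val, "%s", val);
        n_prefs++;
    }
}

/* The app asks the OS what's there; no path, no parser, with a
   built-in fallback if nothing has been stored. */
static const char *pref_get(const char *key, const char *fallback) {
    for (int i = 0; i < n_prefs; i++)
        if (strcmp(store[i].key, key) == 0) return store[i].val;
    return fallback;
}

int main(void) {
    pref_set("editor.font", "Corpus.Medium");
    printf("%s\n", pref_get("editor.font", "System.Std"));
    printf("%s\n", pref_get("editor.size", "12"));
    return 0;
}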

Assumption: there is a filesystem. This is an integrated indexing system that stores data in blocks of auxiliary storage, where they can be found by name, and the OS will read data from them into RAM.

Filesystems are an assumption. Hierarchical filesystems are a bigger one.

Alternatives:

All data is in primary storage (IBM OS/400, AKA IBM i.)

Or, media can contain databases, managed by the OS but not accessible by name (Apple NewtonOS).

Or, the primary storage is objects in RAM, and saving to disk is accomplished by snap-shotting entire system state to auxiliary storage. (Example: Xerox Smalltalk.)

Or, the primary storage is lists of values in RAM, and as above, disks are mainly used for import/export and for holding state snapshots. (Example: Lisp machines.)

When you take a long view, in historical context -- not the narrow parochial one of the last decade or two -- then yes, these are all different implementations of near-identical Unix systems. You've seen one Unix, you've seen 'em all.

What we have today is a biculture: various flavours of Unix, and NT. That's it; nothing else.

There used to be a verdant, rich forest here. Now, there is just a plantation, with fruit trees and pine trees. You're pointing at apple trees and pear trees and saying "look, they're different!" And at plums (and cherries and damsons and peaches) and oranges (and lemons and grapefruit and limes).

Well, yes they are, a little bit. But look deeper, and there are hard fruit, stone fruit, citrus fruit, nuts. But all deciduous broadleafed hardwoods.

There used to be creepers and vines and lianas and rattan, and grasses and orchids and bromeliads and ferns, and giant herbs, and little parasitic things, some with vast flowers, and mosses and lichens and liverworts.

There was a forest, and it's gone, and no, you cannot persuade me that a neat tidy little orchard with a handful of fruit trees is the same thing.


I would argue that the more CS education has become about training people to perform the jobs of the present (and thus work in the languages and systems of the present), it has become a kind of Unix training ground. There is certainly now a whole generation of programmers and CS graduates who never got to experience these other systems, and perhaps know very little about them (CS doesn't like to teach its own history).

I have argued elsewhere on this forum that the environment is ripe for completely new OSes, and that we have advantages over this "previous era." The first is the wide adoption and availability of data interchange formats (think JSON, XML, hell even TCP/IP) that were not as common / didn't exist in the heady days of RiscOS or classic Mac. This gets a new OS much further in the "compatibility problem." Our current "App Culture" also absolves us of the need for true application compatibility. For example, so long as your new OS has a somewhat standards-compliant web browser (no small task), you get perhaps up to 90% of the capabilities most people need.

Another factor is that, while our hardware has really fit itself to C and Unix in often frustrating ways, we have RISCV on the horizon. And though all the writing online about it seems to revolve around getting *nix systems to run (boring), there is enough openness for people to experiment without 40 years of cruft getting in the way.

People really should be asking "what is the point of an operating system? What is actually needed here?" A glance in the direction of Lisp machines or Smalltalk or Oberon would provide a lot of guidance in that regard.

I still believe that one day we can move past the teletype metaphor.


There's a myriad of interesting OSes which run a Linux kernel. Containers, for one (such as Docker). Qubes, for another. Third, Tails. Fourth, NixOS. And so on, and so forth. These 4 examples might each run a Linux kernel, but they each have very different design goals and aims.

I'm much more concerned about the lack of diversity in hardware space... specifically, the lack of open hardware.


Seven of the GP's list are Unices also. They're not the same OS, however.


Chrome OS is Linux too.


ChromeOS uses the Linux kernel, but its userspace has nothing to do with a conventional Linux distribution.

Even Crostini and ARC++ execute inside their own VMs, alongside their own kernel flavours.

With Google turning the Web into ChromeOS, it matters even less, given how many OSes Chrome runs on.


Harvey OS

Redox OS


Both xNix-like.

Harvey is Plan 9, which is Unix 2.0.

Redox is xNix redone in Rust, I think because they've never seen anything that isn't xNix.


The narrative of xNix-like is solely yours. The GP mentioned a bunch of Unix-like operating systems.


> No memory protection or hardware-assisted memory management

This isn’t true. There are many modern features missing from RISC OS but memory protection isn’t one of them.

Each process sees its memory at the same logical address; it doesn't have any way of accessing other processes' memory.

A process's memory starts at the address 0x8000. http://www.riscos.com/support/developers/riscos6/memory/logi...


From - err - memory, the OS used some of the memory protection in hardware, but it didn't use it with security in mind.

I'm pretty sure an application could rewrite important kernel tables below &8000. And if you were using any shared libraries, these were all implemented as kernel modules, with no guard rails.

So it was kinda safe against some accidental access errors, but not at all a secure environment, and definitely possible to blow up the whole machine.


Still better than the Amiga, where there was no memory protection at all. Basically the entire OS was built on sharing memory between processes through message passing...


> It's too late for virtual memory, and we don't really need it any more -- but the programming methods that allow virtual memory, letting programs spill over onto disk if the OS runs low on memory, are the same as those that enforce the protection of each program's RAM from all other programs.

Strictly speaking, I don't think this is true - virtual memory and process-based memory protection are separate issues. One could have a single-address-space OS where physical memory is identically mapped in each process' address space, while still preserving memory protection. A privileged process in such an OS might effectively be exempt from memory protection altogether, which would provide compatibility for legacy apps.


That's the difference between an MMU and MPU.

The first (MMU) adds virtual memory to the story, while the second (MPU) simply protects certain areas of memory.

https://www.geeksforgeeks.org/whats-difference-between-mmu-a...


[Author here]

OK, I will give you that one. :-)

But they are at least related concepts, no?


It was weird to me that you said nobody needs virtual memory.

On Unix, you rely on it any time you mmap() or fork(). And loading code from disk looks very similar to the former.


It's an implementation detail of one type of OS.

I am interested in the bigger picture.

See my FOSDEM talk for more: https://liam-on-linux.livejournal.com/69099.html


Paul Fellows, mentioned by reference in the article as giving the talk on writing RISCOS, presented again at Virtual ABug earlier in the Summer. The talk is online here: http://abug.org.uk/index.php/2020/07/04/paul-fellows/


"A software emulation of 32-bit ARM would be needed, with perhaps a 10x performance drop."

Well, my Archimedes was 8 MHz, and my Pi is 1500 MHz, so we have some room there perhaps?


Don't forget that 187.5x the clock frequency is not the only improvement. Today you also have 4x the cores, 8x the bits, and hardware 4Kp60 HEVC en/decoding. There are probably more things hardware-accelerated (like AES encryption, linear algebra and the FPU), I don't really know. This way your emulated Archimedes can probably do much, much more than it was ever supposed to.


Game Boy Advance emulators emulate 32-bit ARM; plenty of those around.


Which is fine, but the things we do with our home computers today are not the same things we did in the late 80s.


Maybe so, but I'd imagine most of the legacy 32-bit apps you'd be running in emulation here on your 64-bit RISC OS system are from the late 80s (or maybe the early-to-mid-90s), doing those late 80s things.


Plenty of them actually are.

A rusty Amiga 2000 with an Internet connection could handle like 80% of the stuff I use my 2009 laptop for.


If the things we do with our home computers are crunching in kernel space instead of in userland, someone's done something wrong.


The article's talking about running 32-bit applications in emulation, with a 64-bit native kernel.

> Then a rewrite of RISC OS for 64-bit ARM chips would require a 32-bit emulation layer for old apps to run -- and very slowly at that, when ARM chips no longer execute 32-bit code directly. A software emulation of 32-bit ARM would be needed, with perhaps a 10x performance drop.


The cost of simulating 32-bit ARM on 64-bit ARM is nothing like 10x if you try hard enough.

https://www.research.manchester.ac.uk/portal/files/56078084/...


So I have been informed, in some detail, over on Lobste.rs.

I am glad to hear it. I like this little OS and I want to see it survive!


I saw some YouTube videos of RISC OS running on Pi.


[Author here]

I have RISC OS running on a Pi, and it's not the first one I've had, either.

https://twitter.com/lproven/status/1310304554395860996

http://blog.tynemouthsoftware.co.uk/2015/12/day-8-zx-spectru...

I don't just make this stuff up, you know. :-)

I still have my A305, too, and an A5000 as well.


> Thanks to Castle's mysterious one big customer it has survived into its fourth decade.

I’d previously missed this piece of lore. Anyone got any more info on it? I’m trying to picture what critical system built on RISC OS would have required ongoing maintenance and am coming up blank.


Got to wonder whether it's something like nCipher/Thales, due to the van Someren / Aleph One connection.


Don't know, but there was a mysterious entity that nearly bought the rump of Acorn IIRC to keep producing RiscPCs. Due to some fumble the deal fell through. I vaguely remember it being some kind of media-tech company. Probably the same mysterious entity.


I'm guessing Eidos's Optima video editing system? That was the last of the major RISC OS users as I remember it.


RISC OS is quite frankly dead in its current incarnation. And I say that as a somewhat prolific Acorn customer in the 80s and 90s. It was after all a quick hack because they weren’t going to deliver ARX. Some of the impedance mismatches between the OS and everything else in the universe are glaringly irritating today such as paths and file type metadata.

But a lot of the better ideas from the operating system should be carried forwards into something else. I’m not talking about sticking something on top of Linux but something completely different. Perhaps a resurrection of ARX and the UI concepts from RISC OS would be interesting.


Did you read the blog post? (I wrote it, BTW.)

That is more or less what I was arguing for, you see.


I wrote that then read the post if I’m honest. I wanted to compare notes :)


I see RISC OS as having one particular advantage: cooperative multitasking. You can basically turn it into a gigantic MCU with proper interrupts and timing.

It doesn't seem like it's a niche others are interested in, though.

Get a git client and GCC working with it and you have a pretty nifty real-time development platform.


Whenever I see RISC OS I always assume it's something for RISC V.

I know this is not about that actually, just makes me think about it.

But obviously it's for original RISC i.e. ARM. But anyway I still wonder if something like RISC OS could work on RISC V.

I saw a youtube demo of someone running RISC OS on Raspberry Pi and it was very snappy. Something older like that has had less time to acquire features (that can slow it down or just be bloat).

Besides eliminating bloat, I feel like RISC V is an opportunity to start with some new assumptions. So I am interested to hear how operating systems and software evolve for it. Linux is great but I believe that there is always room for a fresh approach after a few decades.

So I wonder if things like Fuchsia or MirageOS or new Rust things etc. will be targeted to RISC V.


> But obviously it's for original RISC i.e. ARM.

Not sure if you actually mean that, but to me it sounds like you're implying that ARM is the "original RISC".

The first RISC processor was called RISC-I and came out of Berkeley research in 1981.

Around the same time came the MIPS CPU, from Stanford University, by a group who shortly after founded a company that by 1985 commercialized the first RISC CPU, the R2000.

ARM, SPARC and PA-RISC came out just about the same time.

The ideas were brewing for some time, but I think it's safe to say historically that the name "RISC" was not invented by ARM, and that the R in ARM references a pre-existing term that had been developed at Berkeley.


It really was the original RISC in a personal computer; you could actually buy one (especially in the UK) and run it at home. The others were things that lived in computer rooms and drove green-screen terminals.


The original RISC was the IBM 801, in ~1975, or arguably even the CDC 6600 in 1965. It is hard to identify anything that originated in RISC-I.


Didn't register windows originate at Berkeley?

Sure, the IBM 801, the CDC 6600 and others were forerunners, but it was the Berkeley research project that popularized the acronym IIRC.


Okay so it was just the most popular RISC and not the original one.

So good job on that correction but it doesn't seem necessary to downvote me. I thought I had some interesting comments about the future of operating systems on RISC.


Most popular hasn't always been accurate either. If you asked in the 90s there'd be Alpha, PPC, SPARC. There was a time when I would have put all of those as more relevant than ARM.


What is the point of this discussion for others not in the ecology?


[Author here]

"Progress, far from consisting in change, depends on retentiveness. When change is absolute there remains no being to improve and no direction is set for possible improvement: and when experience is not retained, as among savages, infancy is perpetual. Those who cannot remember the past are condemned to repeat it."

-- George Santayana (The Life of Reason: The Phases of Human Progress)



