I've wondered if new/hobby OSes would fare better by starting out targeting a popular single-board computer like a Raspberry Pi. A mostly fixed set of hardware to write/get drivers for and to test your system on.
So if you want to develop your own driver for it, you have to second-guess its documentation by reading the Linux driver's comments. This is bad.
But yes, the raspi is a good platform if you're targeting ARM.
As I'm also designing an OS, my biggest piece of advice for anyone seriously considering it is to target two archs at once, in parallel. Then adding a third becomes much easier.
The founder of SerenityOS created it as therapy and a pure “happiness” project. I am not sure actually using it was a real goal. So, he did the stuff he found interesting. That led him to writing display servers and web engines and crypto libraries and away from “real” drivers. He wrote his own C/C++ standard libraries and userland utilities but only enough driver code to make QEMU happy. It only ever ran in a VM on his Linux desktop. In the end, he found the web browser more interesting than the OS it was created for.
A very different project from Linux, where what Linus wanted was an OS for his own computer. Linus was happy to leave the userland to others and still sticks to the kernel even now.
(And even then, its USB controller, for example, has no publicly-available datasheet. If you want to write your own driver for it, you have to read the Linux driver source and adapt it for your needs.)
That's as far as I got before discovering that the Armbian project could handle all that for me. Coincidentally, that's also when I discovered QEMU, because 512MB was no longer enough to pip install pycrypto once they switched to Rust and cargo. My pip install that worked fine with earlier versions suddenly started crashing from running out of memory, so I got to use Armbian's facilities for creating a disk image by building everything on the target architecture via QEMU. Pretty slick. This was for an Orange Pi.
It's basically what Apple does and has done from the start as far as I can tell. The only breakthrough consumer Unix-like thing out there.
System76 is another example of (almost) that.
Frame.work comes close although they don't focus on the OS as much.
To be fair, his career was heavily focused on browser development before he ended up in a period of unemployment (I can't recall the exact circumstances), at which point he developed SerenityOS as a means of meditation/to give him purpose.
He still works on the OS, he's just more fulfilled working in a realm he specializes in and has pivoted focus there.
You can follow his monthly SerenityOS YouTube updates leading up to the Ladybird announcement for a more detailed rundown.
Is that the reason for the full screen of colors before you see the boot sequence? Never thought about that.
But it's not. Over time they've revised the SoC (processor) and gone from 32-bit to 64-bit capability. The latest, the Pi 5, has a totally re-architected I/O subsystem, putting most I/O functions on their RP1 chip and connecting that to the SoC over PCIe.
And as already mentioned, the unusual boot sequence: The GPU takes control on power up and loads the initial code for the CPU.
All of the OSs I'm aware of that run on the Pi depend on "firmware" from the Raspberry Pi folk. Looking at the files in the folder that holds this stuff, it's pretty clear that every variant of the Pi has a file that somehow characterizes it.
That, combined with general flakiness (e.g., power delivery issues for peripherals on older Pis), and you end up with users blaming the software for hardware issues.
It's just not very fun to develop an OS for.
For most things, build on the work of others, but every now and then we should check whether those "giants" actually found the best solution, so that we may change direction if we're heading down the wrong path.
That's not very different from depending on the BIOS/UEFI firmware on a PC; the main difference is that older Raspberry Pis didn't have a persistent flash ROM chip, and loaded their firmware from the SD card instead. Newer Raspberry Pis do have a persistent flash ROM chip, and no longer need to load the firmware from the SD card (you can still do it the old way for recovery).
> And as already mentioned, the unusual boot sequence: The GPU takes control on power up and loads the initial code for the CPU.
Even this is not that unusual; AFAIK, in recent AMD x86 processors, a separate built-in ARM core takes control on power up, initializes the memory controller, and loads the initial code for the CPU. The unusual thing on Raspberry Pi is only that this separate "bootstrap core" is also used as the GPU.
The "color gamut" display, as you call it, is a GPU test pattern, created by start.elf (or start4.elf, or one of the other start*.elf files, depending on what is booting). That third-stage bootloader runs on the GPU and configures the rest of the hardware (like the ARM cores and the RAM split).
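For concreteness, the firmware's behavior is steered by a config.txt on the FAT boot partition, which start*.elf reads before handing off to the CPU. A minimal sketch (exact files and defaults vary by Pi model):

```
# config.txt — read by the GPU firmware before the ARM cores start
arm_64bit=1          # bring the ARM cores up in 64-bit mode
kernel=kernel8.img   # kernel image the firmware loads for the CPU
gpu_mem=64           # RAM split: megabytes reserved for the GPU
```

The kernel image named here is what the GPU-side bootloader copies into RAM and jumps the ARM cores into, which is why every OS on the Pi ends up depending on this firmware.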
Kernel is still in early stages but progress is steady - it's "quietly public". https://github.com/oro-os
Good point. I think most modern motherboards (and all server boards) have a "management engine" that prepares the board on power up and even serves some functions during operation. I believe that's what supports IPMI/iDRAC even while the host is running.
I don't think that changes the situation WRT the variety of H/W for the Pis through their history and even at present.