That is absolutely not an MCU-class footprint. Anything with an "M" in front of its memory size isn't really an MCU. For evidence, I cite ST's page listing all their micros: https://www.st.com/en/microcontrollers-microprocessors/stm32...
Only the very, very high-performance ones have >1MB of RAM.
A couple of years ago it was measured in bytes. Before the RP2040 it was measured in dozens of KiB; now it's measured in MiB.
While I agree that 16 MiB is on the larger side for now, it will only be a couple of years before mainstream MCUs have that amount on board.
It really isn't. The RP2040 has 256KB of RAM, far from 16MB.
>now it's measured in MiB
Where? Very few so far, mostly for image-processing applications, and they cap out at less than 8MB. And those are already bordering on SoCs instead of MCUs.
For applications where 8MB or more is needed, designers already use SoCs with external RAM chips.
>it will only be a couple of years for mainstream MCUs having that amount on board
I doubt that very much. Clip it and let's see in 2 years who's right.
If you choose some arbitrary memory amount as the criterion it will be out of date by next year.
RP2040 is 264k, RP2350 is 520k.
I use NXP's RT1060 and RT1170 for work, and they have 1M and 2M respectively, still quite far from 16M, and those are quite beefy, running at 500MHz-1GHz.
I have typed message passing: I write Erlang wrapping Gleam modules. It's pretty easy.
When you use a µC to make it cheap, then you don't want to use additional components.
Much of it can be worked around as you suggest.
In some of the realtime architectures I've seen, certain processes get priority, or run at a certain Hz, but I've never seen this with the BEAM. AFAIK it "just works", which is great most of the time. I guess you can do Process.flag(:priority, :high), but I'm not sure if that's good enough?
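For what it's worth, here's a minimal Elixir sketch of that flag in use (the GenServer module and its 10 ms tick are invented for illustration). It only biases the BEAM's reduction-based scheduler toward this process; it isn't hard preemption:

    # hypothetical worker; the Process.flag(:priority, :high) call is the point here
    defmodule TimingSensitiveWorker do
      use GenServer

      def start_link(opts), do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)

      @impl true
      def init(opts) do
        # :high (or :max) makes schedulers favour this process over :normal ones,
        # but scheduling is still cooperative (reduction counting), not preemptive
        Process.flag(:priority, :high)
        Process.send_after(self(), :tick, 10)
        {:ok, opts}
      end

      @impl true
      def handle_info(:tick, state) do
        # timing-sensitive work would go here
        Process.send_after(self(), :tick, 10)
        {:noreply, state}
      end
    end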
Without real preemption, consistently meeting strict timing requirements probably isn't going to happen. You might possibly run multiple BEAM instances and use OS preemption?
That said these nomenclatures are a bit fuzzy these days.
I'm an Erlang fanatic, and have been since forever, paid for classes when it was Erlang Training & Consulting at the center of things, flew cross-country to take them, have the t-shirt, hosted Erlang meetups myself in downtown Chicago. I'm not prototyping a microcontroller application in Erlang if I can get it done any other way. It's committing to losing from the outset.
edit: I've always been hopeful for some bare-metal implementation that would at least almost work for cheap µCs, and there have been promising attempts in the past, but they seem to have gone nowhere.
We were doing all sorts of things wrong and not idiomatically, but things turned out ok for the most part.
The fun thing with restart strategies is that if your process fails quickly, you get into restart escalation, where your supervisor restarts because you restarted too many times, and so on, and then the BEAM shuts down. But that happens once or twice and you figure out how to avoid it (I usually put a 1 second sleep at startup in my crashy processes, lol).
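A rough Elixir sketch of that escalation plus the crude startup sleep; the module names are made up, and max_restarts: 3 / max_seconds: 5 are just OTP's defaults written out explicitly:

    defmodule MyApp.CrashyWorker do
      use GenServer

      def start_link(arg), do: GenServer.start_link(__MODULE__, arg)

      @impl true
      def init(arg) do
        # crude workaround: sleep at startup so a tight crash loop doesn't
        # burn through the supervisor's restart budget in under a second
        Process.sleep(1_000)
        {:ok, arg}
      end
    end

    defmodule MyApp.CrashySupervisor do
      use Supervisor

      def start_link(arg), do: Supervisor.start_link(__MODULE__, arg, name: __MODULE__)

      @impl true
      def init(_arg) do
        children = [
          # more than max_restarts crashes within max_seconds and this supervisor
          # gives up and crashes too, escalating to its own supervisor, and so on
          # up the tree until the whole application (and the BEAM node) goes down
          {MyApp.CrashyWorker, []}
        ]

        Supervisor.init(children, strategy: :one_for_one, max_restarts: 3, max_seconds: 5)
      end
    end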
Ghost processes are easy-ish to find. erlang:processes() lists all the pids, and then you can use erlang:process_info() to get information about them... We would dump stats on processes to a log once a minute or so, with some filtering to avoid massive log spew. Those kinds of things can be built up over time... the nice thing is the debug shell can see everything, but you do need to learn the things to look for.
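A small Elixir sketch of that kind of periodic dump, using the Elixir wrappers for those calls (Process.list/0 and Process.info/2); the mailbox threshold, the chosen info keys, and the module name are arbitrary:

    defmodule ProcessStatsDumper do
      require Logger

      # Log any process whose mailbox has grown past the threshold.
      def dump(queue_threshold \\ 100) do
        Enum.each(Process.list(), fn pid ->
          case Process.info(pid, [:registered_name, :message_queue_len, :memory, :current_function]) do
            nil ->
              # process exited between listing and inspection
              :ok

            info ->
              if info[:message_queue_len] > queue_threshold do
                Logger.warning("suspicious process #{inspect(pid)}: #{inspect(info)}")
              end
          end
        end)
      end
    end

    # e.g. run it once a minute:
    # :timer.apply_interval(60_000, ProcessStatsDumper, :dump, [])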
Grisp puts enough controls on the runtime that soft-realtime becomes hard-realtime for all intents and purposes, outside of problems that also cause errors in hard-realtime systems.
(Also, thanks Peer for being tremendously patient with a new embedded developer! That kind of friendly open chat is a huge draw to the Elixir community)
(And I would argue that the ESP32 is an MCU even in this configuration — mostly because it satisfies ultra-low-power-on-idle requirements that most people expect for "pick up, use, put down, holds a charge until you pick it up again" devices.)
So, sure, if you mean the kind of $0.07 MCU IC you'd stuff in a keyboard or mouse, that's definitely not going to be running Nerves (or any other kind of dynamic runtime); you need to go full bare-metal on these.
But if you mean the kind of $2–$8 MCU IC you'd stuff in a webcam, or a managed switch, or a battery-powered soldering iron, or a stick vacuum cleaner with auto suction-level detection, or a kitchen range/microwave/etc with soft-touch controls and an LCD — use-cases where there's more-than-enough profit margin to go around — then yeah, those'll run Nerves just fine.
If you ship MCUs with 16MB of RAM routinely, you're either working with graphics or are actually insane.
A managed switch is very compute heavy, and does usually run a microprocessor with a full RTOS, if not Linux itself, which probably costs in the multi-dollar range. It's also not something most people have at home, outside of the switch built into their router.

Everything else you mentioned usually runs on microcontrollers with under 1 MiB of RAM. For example, Infineon's CYUSB306X series ASICs for webcams come in two RAM sizes, 256 KiB or 512 KiB, despite handling gigabits per second of data, and having an MCU at all isn't even necessary. (https://www.latticesemi.com/usb3#_CDED461DE15245C78A2973D4A4...)

The Pinecil's Bouffalo BL706 MCU has 123 KiB of RAM, despite being a low-volume product where design time matters more than component cost. (https://wiki.pine64.org/wiki/Pinecil, https://en.bouffalolab.com/product/?type=detail&id=8)

Microwave ovens are so high volume that they often don't even use packaged microcontrollers, mounting the die directly on the PCB with an epoxy blob protecting it, and there's no way any would splurge on megabytes of SRAM. The most advanced microwave oven I've seen was from the 90s and definitely didn't splurge on a microprocessor. (https://www.youtube.com/watch?v=UiS27feX8o0)
A slow MCU with external memory could run anything, like booting into Linux on an AVR (https://www.avrfreaks.net/s/topic/a5C3l000000BrFREA0/t392375), but it's going to be extremely slow and not practical for any commercial product, which, if produced in any volume, will have as little RAM as possible.
The sub-$1 parts you refer to will have <100K of RAM (STM32C0, G0, etc.).
ST is actually moving away from the 2MB MCUs, and is instead offering ~1MB with OCTOSPI. I believe the intent is to use offboard flash if you want more (e.g. the newer H7 variants).
What's so cool about the BEAM is you can connect a REPL and debug the program as it's running. It's probably the best possible system for discovering what's happening as things are happening.
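For example (the node names and cookie below are made up), you start the node distributed and can later attach a remote IEx shell to the live system:

    # start the target node with a name and cookie
    iex --name app@10.0.0.5 --cookie secret -S mix

    # later, from another machine (or another terminal), attach to it
    iex --name debug@10.0.0.9 --cookie secret --remsh app@10.0.0.5

From the attached shell you get the same introspection you'd have locally (Process.list/0, Process.info/2, tracing, and so on).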
In general, most JITs or VMs can't even claim guaranteed latency. People who mix these concepts betray their ignorance while seeming intelligent.
FreeRTOS is small and feasible.
VxWorks if your budget is unconstrained.
LinuxRT kernel (used in LinuxCNC) with external context clocking, and/or an FPGA memory DMA overlap module (Zynq SoC, etc.)
Real-time is a specialized, underpaid area, and most people have too abstract an understanding of hardware to tackle metastability problems. =3