That said… we need the “lisp machine” of the future more than we need a recreation.
There is Mezzano [1], as well as the Interlisp project described in the linked paper, and another project resurrecting the LMI software.
Currently working on an accurate model of the MIT CADR in VHDL, and on merging the various System source trees into one that should work on both the Lambda and the CADR.
Sounds extremely interesting -- any links/feeds where one could follow the progress?
The dream of running lisp on hardware made for lisp lives on, against all odds :)
Depends on what one means by that.
Dedicated hardware? I doubt that we’ll ever see that again, although of course I could be wrong.
A full OS? That’s more likely, but only just. If it had some way to run Windows, macOS or Linux programs (maybe just emulation?), then it might have a chance.
As a program? Arguably Emacs is a Lisp Machine for 2025.
Provocative question: would a modern Lisp Machine necessarily use Lisp? I think it probably has to be a language like Lisp, Smalltalk, Forth or Tcl. It’s hard to put into words what these very different languages share that languages such as C, Java and Python lack, but I think that maybe it reduces to elegant dynamism?
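To make "elegant dynamism" slightly more concrete, here is a minimal Common Lisp sketch (any Lisp, Smalltalk or Forth system offers some equivalent): you program the running image itself, not a pile of source files that you recompile and restart.

    ;; Define a function and call it in the live image.
    (defun greet (name)
      (format t "Hello, ~a~%" name))

    (greet "world")        ; prints: Hello, world

    ;; Later, at the same REPL, redefine it in place. Every caller
    ;; picks up the new definition immediately -- no rebuild, no
    ;; restart, the system keeps running the whole time.
    (defun greet (name)
      (format t "Greetings, ~a!~%" name))

    (greet "world")        ; prints: Greetings, world!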
And of course .. https://tumbleweed.nu/lm-3 .
Seeing that not even the "Original Gangster" Lisp Machine used Lisp ...
Both the Lambda and the CADR are RISCy machines with very little that is specific to Lisp (the CADR was designed to run generic VM instructions; one cool hack on the CADR was running PDP-10 instructions).
By Emacs you definitely mean GNU Emacs -- there are other implementations of Emacs. To most people, what the Lisp Machine was (is?) was a full operating system with an editor, compiler, debugger and very easy access to all levels of the system. Lisp .. wasn't really the interesting thing; Smalltalk, Oberon .. share the same idea.
The current state is _very_ fast in simulation, to the point where it is uninteresting (there are other things to figure out) to write a behavioral model of the '181/'182.
~100 microcode instructions take about 0.1 seconds to run -- roughly a thousand microinstructions per second.
Between the specialized hardware we're now building for AI, the emergence of languages like Mojo that take advantage of the hardware architecture, and what I interpret as a renewed interest in FPGAs, perhaps specialized hardware is making a comeback.
If I understand computing history correctly, chip manufacturers like Intel optimized their chips for C compilers to take advantage of the economies of scale created by C/Unix popularity. This came at the cost of killing off the Lisp/Smalltalk-specialized hardware that gave those high-level languages decent performance.
Alan Kay famously said that people who are serious about their software should make their own hardware.
Totally agree.
Here's my idea: stick a bunch of NVRAM DIMMs into a big server box, along with some ordinary SDRAM. So you get a machine where the first, say, 16GB of RAM is ordinary RAM, and the 512GB or 1TB above that in the memory map is persistent RAM. It keeps its contents when the machine is shut off.
That is it. No drives at all. No SSD. All its storage is directly in the CPU memory map.
Modify Interim or Mezzano to boot off a USB key into RAM and store a resume image in the PMEM part of the memory map, so you can suspend, turn off the power, and resume where you were when the power comes back.
https://github.com/froggey/Mezzano
https://github.com/mntmn/interim
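To illustrate the "storage directly in the CPU memory map" part: on a stock Linux box today you can approximate it from userspace by mmap()ing a persistent-memory character device. A rough sketch in Common Lisp using SBCL's sb-posix (the /dev/dax0.0 path is just a hypothetical example, and a real Mezzano/Interim port would own the memory map directly rather than go through a kernel):

    (require :sb-posix)

    ;; Map LENGTH bytes of a persistent-memory device straight into the
    ;; address space: no filesystem, no read()/write(), just memory that
    ;; happens to keep its contents across power-off.
    (defun map-pmem (length &optional (path "/dev/dax0.0"))
      (let ((fd (sb-posix:open path sb-posix:o-rdwr)))
        (unwind-protect
             (sb-posix:mmap nil length
                            (logior sb-posix:prot-read sb-posix:prot-write)
                            sb-posix:map-shared
                            fd 0)
          ;; The mapping remains valid after the descriptor is closed.
          (sb-posix:close fd))))

Anything the image writes through the returned pointer is still there after a power cycle, which is the whole point.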
Now try to crowbar SBCL into this, plus as many libraries and frameworks as can be sucked in. Add all of Medley/Interlisp, and some kind of converter so SBCL can run Interlisp.
You now have an x86-64 LispM, with a whole new architectural model: no files, no disks, no filesystem. It's all just RAM. Workspace at the bottom, disposable. OS and apps higher up where it's nonvolatile.
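The closest you can get with a stock SBCL today is explicit image snapshotting; the persistent-RAM design just makes that snapshot implicit and instantaneous. A trivial example (real API, but only an analogy for the "it's all just RAM" model):

    ;; Build up whatever state you like in the running image...
    (defparameter *workspace* (make-hash-table :test 'equal))
    (setf (gethash "note" *workspace*) "this survives the snapshot")

    ;; ...then freeze the whole heap to a core file and exit. Starting
    ;; again with `sbcl --core lispm.core` resumes from exactly this
    ;; state. On the persistent-RAM machine this step disappears: the
    ;; heap itself is the "core file".
    (sb-ext:save-lisp-and-die "lispm.core")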
I fleshed this out a bit here:
https://archive.fosdem.org/2021/schedule/event/new_type_of_c...
And here...
https://www.theregister.com/2024/02/26/starting_over_rebooti...
A similar comment applies to lm-3. Maybe it is built on a fork of the previous repo; it is hard to tell.