That said… we need the “lisp machine” of the future more than we need a recreation.
There is Mezzano [1] as well as the Interlisp project described in the linked paper and another project resurrecting the LMI software.
Currently working on an accurate model of the MIT CADR in VHDL, and on merging the various System source trees into one that should work on both the Lambda and the CADR.
Sounds extremely interesting; any links/feeds where one could follow the progress?
The dream of running lisp on hardware made for lisp lives on, against all odds :)
Depends on what one means by that.
Dedicated hardware? I doubt that we’ll ever see that again, although of course I could be wrong.
A full OS? That’s more likely, but only just. If it had some way to run Windows, macOS or Linux programs (maybe just emulation?) then it might have a chance.
As a program? Arguably Emacs is a Lisp Machine for 2025.
Provocative question: would a modern Lisp Machine necessarily use Lisp? I think that it probably has to be a language like Lisp, Smalltalk, Forth or Tcl. It’s hard to put into words what these very different languages share that languages such as C, Java and Python lack, but I think that maybe it reduces down to elegant dynamism?
And of course .. https://tumbleweed.nu/lm-3 .
Seeing that not even the "Original Gangster" Lisp Machine used Lisp ...
Both the Lambda and the CADR are RISCy machines with very little specific to Lisp (the CADR was designed to just run generic VM instructions; one cool hack on the CADR was running PDP-10 instructions).
By Emacs you definitely mean GNU Emacs -- there are other implementations of Emacs. To most people, what the Lisp Machine was (is?) was a full operating system with editor, compiler, debugger and very easy access to all levels of the system. Lisp wasn't really the interesting thing; Smalltalk, Oberon .. share the same idea.
The current state is _very_ fast in simulation, to the point where it is uninteresting (there are other things to figure out) to write something like a behavioral model of the '181/'182.
Running ~100 microcode instructions takes about 0.1 seconds.
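For context, a behavioral model describes the '181 by its function table rather than as the gate network of the actual chip, which usually simulates a lot faster than a structural netlist. A minimal sketch of that style, covering only a few representative 74181 select codes (carry polarity simplified relative to the datasheet, no P/G outputs for the '182, entity and signal names mine), would be something like:

    -- Minimal behavioral sketch of a 74181-style 4-bit ALU slice.
    -- Only a few representative select codes are modeled; a real
    -- behavioral '181 would cover all 16 functions in both modes
    -- and expose the P/G outputs that feed the '182 carry lookahead.
    library ieee;
    use ieee.std_logic_1164.all;
    use ieee.numeric_std.all;

    entity alu181_sketch is
      port (
        a, b : in  std_logic_vector(3 downto 0);  -- operands
        s    : in  std_logic_vector(3 downto 0);  -- function select
        m    : in  std_logic;                     -- '1' = logic mode, '0' = arithmetic
        cin  : in  std_logic;                     -- carry in (active-high here, unlike the real part)
        f    : out std_logic_vector(3 downto 0);  -- result
        cout : out std_logic                      -- carry out
      );
    end entity;

    architecture behavioral of alu181_sketch is
    begin
      process (a, b, s, m, cin)
        variable sum : unsigned(4 downto 0);
      begin
        cout <= '0';
        if m = '1' then                           -- logic functions
          case s is
            when "1011" => f <= a and b;
            when "1110" => f <= a or b;
            when "0110" => f <= a xor b;
            when others => f <= (others => '0');  -- codes not modeled in this sketch
          end case;
        else                                      -- arithmetic: only "A plus B" shown
          case s is
            when "1001" =>
              sum := ('0' & unsigned(a)) + ('0' & unsigned(b));
              if cin = '1' then
                sum := sum + 1;
              end if;
              f    <= std_logic_vector(sum(3 downto 0));
              cout <= sum(4);
            when others =>
              f <= (others => '0');
          end case;
        end if;
      end process;
    end architecture;

The point being: when even the full gate-level simulation already runs this fast, swapping in something like the above buys nothing right now.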
With specialized hardware now being built for AI, the emergence of languages like Mojo that take advantage of the hardware architecture, and what I interpret as a renewed interest in FPGAs, perhaps specialized hardware is making a comeback.
If I understand computing history correctly, chip manufacturers like Intel optimized their chips for C compilers to take advantage of the economies of scale created by C/Unix popularity. This came at the cost of killing off Lisp/Smalltalk-specialized hardware that gave these high-level languages decent performance.
Alan Kay famously said that people who are serious about their software should make their own hardware.
A similar comment applies to lm-3. Maybe it is built on a fork of the previous repo; it is hard to tell.