Terminal emulators display grids of characters using all sorts of horrifying protocols.
Web browsers display html generated by other programs.
The web solves problems that are almost impossible to solve properly with a terminal, particularly the rendering of more complex scripts and the display of, and interaction with, sophisticated visualisations.
Pushing the terminal further while maintaining compatibility, performance and avoiding a terminal war with incompatible protocols is going to be a struggle.
Some lesson must surely be drawn from this about incremental adoption.
There's even more under the "Updates archive" expando in that post.
It was a pretty compelling prototype. But after I played with Polyglot Notebooks[1], I pretty much just abandoned that experiment. There's a _lot_ of UI that needs to be written to build a notebook-like experience. But Polyglot Notebooks took care of that by just converting the command-line backend to a Jupyter kernel.
I've been writing more and more script-like experiments in those ever since. Just seems so much more natural to have a big-ol doc full of notes, that just so happens to also have play buttons to Do The Thing.
[1]: https://marketplace.visualstudio.com/items?itemName=ms-dotne...
https://commons.wikimedia.org/wiki/File:DEC_VT100_terminal.j...
I may disappoint you with the fact that IBM PC-compatible computers have replaced devices of that class. We can only observe certain terminal emulators in some operating systems. There have been many attempts to expand the functionality of these emulators. However, most features beyond the capabilities of VT100 have not caught on (except UTF-8 support). I do not believe that anything will change in the foreseeable future.
- Emacs (inherited from lisp machines?). A VM which is powered by lisp. The latter makes it easy to redefine functions, and commands are just annotated functions. As for output, we have the buffer, which can be displayed in windows, which are arranged in a tiling manner in a frame. And you can have several frames. As the buffer in a window has the same grid-like basis as the terminal emulator, we can use CLIs as is, including like a terminal emulator (vterm, eat, ansi-term,...). You can eschew the terminal flow and use the REPL flow instead (shell-mode, eshell,...). There's support for graphics, but not a full 2d context.
- Acme: Kinda similar to Emacs, but the whole thing is mostly about interactive text, meaning any text can be a command. We also have the tiling and stacking windows that display those texts.
I would add Smalltalk to that, but it's more of an IDE than a full computing environment. But to extend it to the latter would still be a lower effort than what is described in the article.
one of the strange things to me about the terminal landscape is how little knowledge sharing there is compared to other domains i'm familiar with. iTerm has a bunch of things no one else has; kitty influenced wezterm but otherwise no one else seems to have valued reflection; there's a whole bunch of extensions to ANSI escapes but most of them are non-standard and mutually incompatible. it's weird. if i compare to something like build systems, there's a lot more cross-pollination of ideas there.
What sort of "horrifying protocols"? The entire VT220 state machine diagram can be printed on a single letter- or A4-sized sheet of paper. That's the complete "protocol" of that particular terminal (and of any emulator of it). Implementing the VT220 with a few small extensions (e.g., 256 colors or 24-bit colors) wouldn't be too onerous. I implemented such a parser myself in probably a few hundred lines of code, plus a bit more to do all of the rendering (drawing glyphs directly to a bitmapped display, with no libraries) and handling user input from a keyboard. You'd have a difficult time properly parsing and rendering a significant subset of HTML in less than a few _thousand_ lines of code.
Edit to add: terminal emulators often implement other terminals like VT420, but the VT220 is enough for the vast majority of terminal needs.
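For a sense of the scale involved, here is a toy sketch of that kind of parser in Python (not the implementation described above, and covering only a tiny subset: printable characters plus CSI sequences such as cursor movement and SGR colors):

```python
# Toy VT-style escape sequence parser: printable text, ESC, and CSI only.
# A real VT220 parser adds states for OSC, DCS, charset selection, etc.
from enum import Enum, auto

class State(Enum):
    GROUND = auto()   # normal printable text
    ESCAPE = auto()   # just saw ESC (0x1B)
    CSI = auto()      # inside an ESC [ ... sequence

class TinyVTParser:
    def __init__(self, on_print, on_csi):
        self.state = State.GROUND
        self.params = ""          # accumulated numeric parameters, e.g. "1;31"
        self.on_print = on_print  # callback for printable characters
        self.on_csi = on_csi      # callback(final_byte, [int params])

    def feed(self, data: bytes):
        for byte in data:
            ch = chr(byte)
            if self.state is State.GROUND:
                if byte == 0x1B:
                    self.state = State.ESCAPE
                elif byte >= 0x20:
                    self.on_print(ch)
            elif self.state is State.ESCAPE:
                if ch == "[":
                    self.params = ""
                    self.state = State.CSI
                else:
                    self.state = State.GROUND  # ignore other escapes in this toy
            elif self.state is State.CSI:
                if ch.isdigit() or ch == ";":
                    self.params += ch
                elif 0x40 <= byte <= 0x7E:     # final byte, e.g. 'm', 'H', 'J'
                    nums = [int(p) for p in self.params.split(";") if p]
                    self.on_csi(ch, nums)
                    self.state = State.GROUND

parser = TinyVTParser(
    on_print=lambda c: print(f"print {c!r}"),
    on_csi=lambda final, params: print(f"CSI {final} {params}"),
)
parser.feed(b"hi \x1b[1;31mred\x1b[0m")
```

A full VT220-class parser needs more states than this, but the overall shape stays about this small, which is the point being made about relative complexity versus HTML.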
Don't get me wrong, I'd be quite interested in a vintage computing discussion on the evolution of VT-100/220 etc terminal protocols. There were some interesting things done into the 90s. That's actually what I clicked in expecting. Of course, those were all supplanted by either XWindows (which I never got to use much) or eventually HTML/CSS. And if we're talking more broadly about structured page description languages, there's no shortage of alternatives from NAPLPS to Display Postscript.
The last thing a command-line terminal needs is a Jupyter Notebook-like UI. It doesn't need to render HTML; it doesn't need rerun and undo/redo; and it definitely doesn't need structured RPC. Many of the mentioned features are already supported by various tooling, yet the author dismisses them because... bugs?
Yes, terminal emulators and shells have a lot of historical baggage that we may consider weird or clunky by today's standards. But many design decisions made 40 years ago are directly related to why some software has stood the test of time, and why we still use it today.
"Modernizing" this usually comes with very high maintenance or compatibility costs. So, let's say you want structured data exchange between programs ala PowerShell, Nushell, etc. Great, now you just need to build and maintain shims for every tool in existence, force your users to use your own custom tools that support these features, and ensure that everything interoperates smoothly. So now instead of creating an open standard that everyone can build within and around of, you've built a closed ecosystem that has to be maintained centrally. And yet the "archaic" unstructured data approach is what allows me to write a script with tools written decades ago interoperating seamlessly with tools written today, without either tool needing to directly support the other, or the shell and terminal needing to be aware of this. It all just works.
I'm not saying that this ecosystem couldn't be improved. But it needs broad community discussion, planning, and support, and not a brain dump from someone who feels inspired by Jupyter Notebooks.
With Lisp REPLs, one types in the IDE/editor with full highlighting, completions and code intelligence. Then code is sent to the REPL process for evaluation. For example, Clojure has great REPL tooling.
A variation of REPL is the REBL (Read-Eval-Browse Loop) concept, where instead of the output being simply printed as text, it is treated as values that can be visualized and browsed using graphical viewers.
Existing editors can already cover the runbooks use case pretty well. Those can be just markdown files with key bindings to send code blocks to a shell process for evaluation. It works great with instructions in markdown READMEs.
The main missing feature of an editor-centric command-line workflow that I can imagine is history search. It could be interesting to see if it would be enough to add shell history as a completion source. Or perhaps have a shell LSP server provide history and other completions that could work across editors?
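As a toy illustration of the history-as-completion-source idea (this is just a prefix match over a bash-style history file, with the path assumed to be ~/.bash_history; a real integration would speak LSP or the editor's own completion API):

```python
# Toy "history as completion source": prefix-match the shell history file.
# Assumes a plain-text bash-style history at ~/.bash_history.
import os

def history_completions(prefix: str, limit: int = 10) -> list[str]:
    path = os.path.expanduser("~/.bash_history")
    try:
        with open(path, encoding="utf-8", errors="replace") as f:
            lines = [line.strip() for line in f]
    except FileNotFoundError:
        return []
    seen, matches = set(), []
    for line in reversed(lines):          # most recent entries first
        if line.startswith(prefix) and line not in seen:
            seen.add(line)
            matches.append(line)
            if len(matches) >= limit:
                break
    return matches

if __name__ == "__main__":
    for candidate in history_completions("git "):
        print(candidate)
```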
Yes, this is the work. https://becca.ooo/blog/vertical-integration/
- https://arcan-fe.com/ which introduces a new protocol for TUI applications, which leads to better interactions across the different layers (hard to describe! but the website has nice videos and explanations of what is made possible)
- Shelter, a shell with reproducible operations and git-like branches of the filesystem https://patrick.sirref.org/shelter/index.xml
I open a poorly aligned, pixelated PDF scan of a 100+ year old Latin textbook in Emacs, mark a start page, end page, and Emacs lisp code shells out to qpdf to create a new smaller PDF from my page range to /tmp, and then adds the resulting PDF to my LLM context. Then my code calls gptel-request with a custom prompt and I get an async elisp callback with the OCR'd PDF now in Emacs' org-mode format, complete with italics, bold, nicely formatted tables, and with all the right macrons over the vowels, which I toss into a scratch buffer. Now that the chapter from my textbook is in a markup format, I can select a word and immediately pop up a Latin-to-English dictionary entry, or select a whole sentence to hand to an LLM to analyze with a full grammatical breakdown while I'm doing my homework exercises. This 1970s vintage text editor is also a futuristic language learning platform; it blows my mind.
Atuin runbooks (mentioned in the article) do this! Pretty much anywhere we allow users to start typing a shell command we feed shell history into the editor
But just embedding a browser, like Jupyter does, would be very useful. It can handle a wide variety of media and can easily show JS-heavy webpages, unlike curl; and with a text option to show text-based results like w3m does, but with JS support, it would be even more useful.
browser google.com/maps # show google map and use interactively
browser 'google.com/search?q=cat&udm=2' # show google image result
browser --text jsheavy.com | grep -C 10 keyword # show content around keyword but can handle JS
vim =(browser --text news.ycombinator.com/item?id=45890186) # show Hacker News article and can edit text result directly
Why? Well, one reason is escape sequences are really limited and messy. This would enable everyone to gradually and backward-compatibly transition to a more modern alternative. Once you have a JSON-RPC channel, the two ends can use it to negotiate what specific features they support. It would be leveraging patterns already popular with LSP, MCP, etc. And it would be mostly in userspace; only a small kernel enhancement would be required (the kernel doesn’t have to actually understand these JSON-RPC messages, just offer a side channel to convey them).
I suppose you could do it without any kernel change if you just put a Unix domain socket path in an environment variable, but that would be more fragile: some process will end up with your pty but missing the environment variable, or vice versa.
Actually I’d add this out-of-band JSON-RPC feature to pipes too, so if I run “foo | bar”, foo and bar can potentially engage in content/feature negotiation with each other
Passing the process ID and user ID might be helpful to improve security of the terminal emulator, too. If the sidechannel is a UNIX socket then it will do this (with SCM_CREDENTIALS), as well as pass file descriptors (with SCM_RIGHTS).
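A rough sketch of the environment-variable variant under discussion, assuming Linux: the terminal listens on a Unix socket, advertises the path in a made-up TERM_RPC_SOCKET variable, checks the connecting process's pid/uid/gid with SO_PEERCRED, and exchanges one JSON-RPC message. Everything here (the variable name, the method, the feature list) is illustrative, not an existing protocol:

```python
# Sketch of a JSON-RPC side channel for a terminal emulator (Linux only).
# The socket path is advertised in a hypothetical TERM_RPC_SOCKET variable;
# SO_PEERCRED lets the terminal see the connecting process's pid/uid/gid.
import json, os, socket, struct, tempfile, threading

SOCK_PATH = os.path.join(tempfile.mkdtemp(), "term-rpc.sock")

def terminal_side(server):
    # Accept one client, check its credentials, answer one JSON-RPC request.
    conn, _ = server.accept()
    pid, uid, gid = struct.unpack("3i", conn.getsockopt(
        socket.SOL_SOCKET, socket.SO_PEERCRED, struct.calcsize("3i")))
    print(f"terminal: client pid={pid} uid={uid} gid={gid}")
    request = json.loads(conn.recv(4096).decode())
    reply = {"jsonrpc": "2.0", "id": request["id"],
             "result": {"features": ["hyperlinks", "inline-images"]}}
    conn.sendall(json.dumps(reply).encode() + b"\n")
    conn.close()

def client_side():
    # A child process would find the socket path in its environment.
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect(os.environ["TERM_RPC_SOCKET"])
    sock.sendall(json.dumps({"jsonrpc": "2.0", "id": 1,
                             "method": "negotiate", "params": {}}).encode() + b"\n")
    print("client:", json.loads(sock.recv(4096).decode()))
    sock.close()

if __name__ == "__main__":
    server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    server.bind(SOCK_PATH)
    server.listen(1)
    os.environ["TERM_RPC_SOCKET"] = SOCK_PATH   # children would inherit this
    threading.Thread(target=terminal_side, args=(server,), daemon=True).start()
    client_side()
```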
Maintaining a high level of backwards compatibility while improving the user experience is critical. Or at least to me. For example, my #1 frustration with neovim is the change to ! not just swapping the alt screen back to the default and letting me see and run what I was doing outside of it.
We generally like the terminal because, unlike GUIs it's super easy to turn a workflow into a script, a manual process into an automated process. Everything is reproducible, and everything is ripgrep-able. It's all right there at your fingertips.
I fell in love with computers twice, once when I got my first one, and again when I learned to use the terminal.
Independent of the rest, I would love for more terminal emulators to support OSC 133.
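For context, OSC 133 is the "semantic prompt" convention (from FinalTerm, now supported by several emulators) in which the shell brackets the prompt, the command, and its output with escape sequences, so the terminal can jump between prompts or select a command's output. A rough sketch of what a shell integration emits (the marker letters are as I understand the convention; check your terminal's documentation before relying on them):

```python
# Rough sketch of OSC 133 "semantic prompt" markers as emitted by a shell
# integration. A = prompt start, B = command start, C = output start,
# D;<exit> = command finished. BEL terminates each OSC here for simplicity.
import subprocess, sys

OSC, BEL = "\x1b]", "\x07"

def mark(code: str) -> None:
    sys.stdout.write(f"{OSC}133;{code}{BEL}")
    sys.stdout.flush()

def run_with_markers(command: list[str]) -> None:
    mark("A")                       # prompt region starts
    sys.stdout.write("$ ")
    mark("B")                       # prompt ends, command input starts
    sys.stdout.write(" ".join(command) + "\n")
    mark("C")                       # command runs, output region starts
    result = subprocess.run(command)
    mark(f"D;{result.returncode}")  # command finished with this exit status

run_with_markers(["echo", "hello from a marked command"])
```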
No need for content/feature negotiation: machine-readable output just defaults to JSON unless there's a --format flag for something else. And if you add that on the generating side of the pipe, you just need to remember to put it on the consuming side.
Unless someone creates a cross-platform, open source, modern and standards compliant terminal engine [1].
Missing out on inline images and megabytes of true-color CSI codes is a feature, not a bug, when bandwidth is limited.
If you want jupyter, we have jupyter. If you want HTML, we have several browsers. If you want something else, make it, but please don’t use vt220 codes and call it a terminal.
The article is just wish-listing more NIH barbarism to break things with. RedHat would hire this guy in a heartbeat.
Got a link to what you meant? This is pretty hard to search for.
> - Emacs
One thing in common with emacs, jupyter, vscode.. these are all capable platforms but not solutions, and if you want to replace your terminal emulator by building on top of them it's doable but doesn't feel very portable.
I'd challenge people that are making cool stuff to show it, and then ship it. Not a pile of config + a constellation of plugins at undeclared versions + a "simple" 12-step process that would-be adopters must copy/paste. That's platform customization, not something that feels like an application. Actually try bundling your cool hack as a docker container or a self-extracting executable of some kind so that it's low-effort reproducible.
My biggest gripe with it is that it quickly ends up becoming an actual production workload, and it is not simple to “deploy” and “run” it in an ops way.
Lots of local/project specific stuff like hardcoded machine paths from developers or implicit environments.
Yes, I know it can be done right, but it makes it sooooooooo easy to do it wrong.
I think I can’t not see it as some scratchpad for ad-hoc stuff.
There are problems with using JSON for this; other formats would be better. JSON needs escaping, cannot effectively transfer binary data (other than encoding as hex or base64), cannot use character sets other than Unicode, etc. People think JSON is good, but it isn't.
Also, you might want to use less or other programs for the text output, which might be the primary output that you might also want to pipe to other programs, redirect to a file (or printer), etc. This text might be separate from the status messages (which would be sent to stderr; these status messages are not necessarily errors, although they might be). If you use --help deliberately then the help message is the primary message, not a status message.
(In a new operating system design it could be improved, but even then, JSON is not the format for this; a binary format would be better (possibly DER, or SDSER, which is a variant of DER that supports streaming in a (in my opinion) better way than CER and BER do).)
(Another possibility might be to add another file descriptor for structured data, and then use an environment variable to indicate its presence. However, this just adds to the messiness of it a little bit, and requires a bit more work to use it with the standard command shells.)
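A minimal sketch of that extra-file-descriptor idea on a POSIX system, with a made-up STRUCTURED_FD variable: the parent opens a pipe, exports the descriptor number in the environment, and the child writes structured records there only when the variable is present, leaving stdout and stderr untouched:

```python
# Sketch: an extra stream for structured data, advertised via a made-up
# STRUCTURED_FD environment variable. Text still goes to stdout as usual;
# structured records only flow if the parent actually set things up.
import json, os, subprocess, sys

CHILD = r"""
import json, os
print("human-readable text on stdout, as always")
fd = os.environ.get("STRUCTURED_FD")
if fd is not None:                      # only emit records if the channel exists
    with os.fdopen(int(fd), "w") as out:
        json.dump({"event": "done", "files": 3}, out)
"""

read_fd, write_fd = os.pipe()
child = subprocess.Popen(
    [sys.executable, "-c", CHILD],
    env={**os.environ, "STRUCTURED_FD": str(write_fd)},
    pass_fds=(write_fd,),               # keep the same fd number in the child
)
os.close(write_fd)                      # keep only the child's copy open
with os.fdopen(read_fd) as structured:
    child.wait()
    print("parent got:", json.loads(structured.read() or "{}"))
```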
It's part of plan9:
Answering --help with JSON is a good example, how bad is it really if the response is JSON? Well, using less works fine still and you can still grep if you want simple substring search. Wanting a section is probably more common, so maybe you'd "grep" for a subcommand with `jq .subcommand` or an option with `jq .subcommand.option`, and maybe get yourself a fancier, JSON-friendly version of less that handles escaped quotes and newlines. Tables and tab-or-space delimited output overflow char limits, force the command-generator to figure out character wrapping, and so on. Now you need a library to generate CLI help properly, but if you're going to have a library why not just spit JSON and decouple completely from display details to let the consumer handle it.
Structured output by default just makes sense for practically everything except `cat`. And while your markdown files or csv files might have quoted strings, looking at the raw files isn't something people really want from shells or editors.. they want something "rendered" in one way or another, for example with syntax highlighting.
Basically in 2025 neither humans nor machines benefit much from unstructured raw output. Almost any CLI that does this needs to be paired with a parser (like https://github.com/kellyjonbrazil/jc) and/or a renderer (like https://github.com/charmbracelet/glow). If no such pairing is available then it pushes many people to separately reinvent parsers badly. JSON's not perfect but (non-minified) it's human-readable enough to address the basic issues here without jumping all the way towards binary or (shudder) HTML
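As a concrete, made-up illustration of the --help case: a tool that can describe its own interface as JSON lets the consumer pull out exactly the piece it wants instead of regex-scraping wrapped text. The schema and the vcs/clone/log names below are invented for the example:

```python
# Invented example: a CLI that can emit its own help as JSON, so a consumer
# (a pager, a completion engine, or a jq query) can pull out exactly the piece
# it needs instead of scraping wrapped, human-formatted text.
import json, sys

HELP = {
    "name": "vcs",
    "subcommands": {
        "clone": {
            "summary": "Copy a remote repository",
            "options": {"--depth": "Limit history to N commits"},
        },
        "log": {
            "summary": "Show history",
            "options": {"--oneline": "One line per commit"},
        },
    },
}

if "--help" in sys.argv:
    if "--format=json" in sys.argv:
        json.dump(HELP, sys.stdout, indent=2)        # machine-readable help
    else:                                            # plain rendering for humans
        for name, sub in HELP["subcommands"].items():
            print(f"{name:8} {sub['summary']}")
            for opt, desc in sub["options"].items():
                print(f"  {opt:12} {desc}")
```

A consumer could then run something like `vcs --help --format=json | jq '.subcommands.clone'` without caring how the human rendering wraps its lines.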
http://www.youtube.com/watch?v=dP1xVpMPn8M
> I'd challenge people that are making cool stuff to show it, and then ship it.
Emacs has the following built in, and more:
- Org mode (with babel): Note taking and outliner, authoring, notebooks, agenda, task management, timetracking,...
- Eshell: A shell in lisp, similar to fish, but all the editor commands are available like cli tools.
- comint: All things REPL (sql client, python,...)
- shell-command and shell-mode: The first is for ad-hoc commands, the second is derived from comint and give you the shell in an REPL environment (no TUI).
- term: terminal emulator, when you really want a tui. But the support for escape sequences is limited, so you may want something like `eat` or `vterm`.
- compile: all things build tools. If you have something that report errors and where those errors are located in files, then you can tie it to compile and have fast navigation to those locations.
- flymake: Watch mode for the above. It lets you analyze the current file
- ispell and flyspell: Spell checking
- dired: file management
- grep: Use the output of $grep_like_tool for navigation
- gnus and rmail: Everything mail and newsgroup.
- proced: Like top
- docview: View pdf and ps files, although you can probably hack it to display more types.
- tramp: Edit files from anywhere...
And many more, from utilities (calc, calendar) and games to low level functions (IPC, network,...) and full blown applications (debugger, MPD client). And a lot of stuff for writing text and code. All lisp code, with nice documentation. That's just the built-in stuff.
If not for the state of the Web, you could probably go straight from init to Emacs.
> JSON's not perfect but (non-minified) it's human-readable enough to address the basic issues here without jumping all the way towards binary or (shudder) HTML
It does not address most of the real issues. Programs that deal with pictures, sounds, non-Unicode text, structures of the kinds that JSON does not have, etc., will not do as well; and the input/output will involve conversion and escaping. (One format that I think is better is DER, although it is a binary format. I did write a program to convert JSON to DER, though.)
To successfully argue that it's just perfect as a terminal emulator, I think you need to find a way to ship it in exactly that configuration. That would mean that you open it up to a shell prompt with a dollar sign, you can hit ctrl-t to get a new terminal tab. Clicking URLs should open them in a browser without having to copy/paste. Speaking of copy/paste, that should work too, and ctrl-e, and ctrl-a, etc, etc.
It so happens that right now one is synonymous with the other, but there's no intrinsic requirement.
There's probably something to be said for the inherent constraints imposed by the terminal protocol, but, again, we can build the same things without that.
> Many of these implementations are ad-hoc, one-off solutions. They aren't using any shared library or codebase. Terminal emulation is a classic problem that appears simple on the surface but is riddled with unexpected complexities and edge cases. As a result, most of these implementations are incomplete, buggy, and slow. [1]
(I mean, it's possible html/css deserves to be called horrible also but they produce an undeniably superior result)
With terminals, you have the escape sequences, the alternate screen, the shell capabilities. With Emacs, you have a lisp VM with a huge library of functions and buffers. I still use a normal terminal like xterm and Terminal.app, but I have eat installed and it's working great.
Its flexibility is beyond imagination. Programs can emit anything from simple numbers/vectors/matrices to media (images, sound, video, either loaded or generated) to interactive programs, all of which can be embedded into the notebook. You can also manipulate every input and output code block programmatically, because it's Lisp, and can even programmatically generate notebooks. It can also do typesetting and generate presentations/PDF/HTML from notebooks.
What people have been doing w/ Markdown and Jupyter in recent years has been available in Mathematica since (at least) 1-2 decades ago. FOSS solutions still fall short, because they rely on static languages (relative to Lisp, of course).
I mean, really, it's a technological marvel. It's just that it's barred behind a high price tag and limited to low core counts.
Maybe it is an API. Maybe the kernel implements this API and it can be called locally or remotely. Maybe someone invents an OAuth translation layer to UIDs. The API allows syscalls or process invocation. Output is returned in response payload (ofc we have a stream shape too).
Maybe in the future your “terminal” is an app that wraps this API, authenticates you to the server with OAuth, and can take whatever shape pleases you- REPL, TUI, browser-ish, DOOM- like (shoot the enemy corresponding to the syscall you want to make), whatever floats your boat.
Heresy warning. Maybe the inputs and outputs don’t look anything like CLI or stdio text. Maybe we move on from 1000-different DSLs (each CLI’s unique input parameters and output formats) and make inputs and outputs object shaped. Maybe we make the available set of objects, methods and schemas discoverable in the terminal API.
Terminals aren’t a thing of the 80s; they’re a thing of the early 70s when somebody came up with a clever hack to take a mostly dumb device with a CRT and keyboard and hook it to a serial port on a mainframe.
Nowadays we don’t need that at all; old-timers like me like it because it’s familiar but it’s all legacy invented for a world that is no longer relevant. Even boot environments can do better than terminals today.
A "barely better" version of something entrenched rarely win (maybe only if the old thing not get updaters).
This is the curse of OpenOffice < MS Office.
This is in fact the major reason:
> Great, now you just need to build and maintain shims for every tool in existence
MOST of those tools are very bad at UX! So inconsistent, weird, and arcane that yes, it is MADNESS to shim all of them.
Instead, if done from first principles, you can collapse thousands of CLI arguments, options, switches and such into a few (btw, a good example is jj vs git).
This is how it could be: adopt an algebra similar to the relational model, and a standardized set of the most common things millions of little tools have (like help commands, sort, colors, input/output formats, etc.), and then suddenly you have a more tractable solution.
ONLY when a tool is a total game changer people will switch.
And what about all the other stuff? In FoxPro (which in some ways shows the idea), you just prepend `!` and then run the shell command you need. That is enough. (Editors and such? Much better to redo in the new way, and everyone knows that vim and emacs fans never change their ways.)
This is Powershell. It’s a cool idea for sure. One thing I’ve noticed though is that it becomes closer to a programming language and further away from scripting (ie you have to memorize the APIs and object shapes). And at that point, why would you write the program in a worse programming language?
By comparison, I’ve noticed even Windows-leaning folks do a better job remembering how to delete files and find files in a Unix shell than doing so through cmd.exe or PowerShell. I think that’s because you can run the command to see the output and then you know the text transformation you need to apply for the next step, whereas PowerShell shows you formatted text but passes objects in the pipe.
Maybe a better terminal that provided completion for commands with AI support and a uniform way to observe the object shapes instead of formatted text might mitigate this weakness but it is real today at least imho.
You are better off maintaining what already works. Either way why do you want to migrate when things are just working fine as is?
I think because we already have non-text-based terminal successors.
I think there is interest in a successor to text-based terminals because a lot of people like them, but the space has been rather stagnant for a while.
To put it bluntly, "what if it's nothing like you ever imagined" isn't all that interesting as speculation, because it doesn't commit to any choices. The proposal has to be imaginable to be interesting.
Yes, you effectively are, and the current unstructured buggy mess is "just works" for you.
> But it needs broad community discussion, planning, and support,
Where was this when all the historic mistakes were made? And why would fixing them suddenly need to overcome this extra barrier?
But trying to add a fourth now would likely break too many things; some software will assume any fd > 2 is free for it to clobber.
On the IBM mainframe operating system z/OS (formerly MVS), the classic API uses names (DDNAMEs) instead of numbers for inherited descriptors; that would have made adding a new one a lot easier. But it's decades too late for that in Unix land, and eventually MVS added the Unix way too for Unix compatibility: the classic API still uses DDNAMEs, but many apps now use the file-descriptor-based z/OS Unix API instead.
1. Install Emacs
2. git clone --depth 1 https://github.com/doomemacs/doomemacs ~/.config/emacs
3. ~/.config/emacs/bin/doom install
Now open Emacs and stuff just works. You can customize it later if you want.
I agree with your general point. People mostly want stuff that just works. Very few want to become experts (and nobody can be an expert in everything.)
Somewhat true. However it's easy to explore what methods and properties are available. Just add `| gm` (Get-Member) to the end of your pipeline to see what you're dealing with and what's available.
1. a full-fledged programming language
2. no namespacing (nothing is private)
3. no modern GUI concepts to take into account (no CSS `flex-direction`...)
4. no edit-compile-run cycle
5. people have written extensions for many decades
6. people always write extensions with the possibility in mind that their extensions may be extended
Then you can probably see how it works, with just your imagination!
Of course there's a number of epiphanies that may be needed... Like how the principle "compose many small Unix programs with text as the universal interface" is just like "compose many functions with return values as the universal interface", or that it isn't an editor and more like a terminal (with integrated tmux-like functionality) that you decided to turn into an editor, or that an editor waiting for text entry is just stuck in a `while`-loop of reading the next input character, what even is a shell, what even is a computer, etc etc.
That is typically not the job of terminals, but of programs. fbi, omxplayer, etc exist.
Entirely agree. Stdio text (which is really just stdio bytes) deeply limits how composable your shell programs can be, since data and its representation are tightly coupled (they're exactly the same). I wrote a smidgin here[0] on my blog, but take a look at this unix vs. PowerShell example I have there. Please look beyond PowerShell's incidental verbosity here and focus more deeply on the profoundly superior composition that you can only have once you get self-describing objects over stdio instead of plain bytes.
$ # the unix way
$ find . -name '*.go' -not -name '*_test.go' -ctime -4 -exec cat {} \; | wc -l
7119
$ # the powershell way
$ pwsh -c 'gci -recurse | where {($_.name -like "*.go") -and ($_.name -notlike "*_test.go") -and ($_.LastWriteTime -gt (get-date).AddDays(-4))} | gc | measure | select -ExpandProperty count'
7119
[0] https://www.cgl.sh/blog/posts/sh.html
> fbi, omxplayer, etc exist.
https://github.com/Julien-cpsn/desktop-tui
It is incomplete but takes what is almost a side aspect of TWIN and runs with it.
https://github.com/cosmos72/twin
TWIN is nearly 20 now and does quite a lot. It even has a Wikipedia page.
https://en.wikipedia.org/wiki/Twin_(windowing_system)
It runs on lots more OSes than just Linux.
and all it took was a deep understanding of software development, experience with lisp and a bunch of your own time coding and debugging! what a piece of software!
This is why I wrote this:
https://www.theregister.com/2025/06/24/tiling_multiplexers_s...
Trying to bring a bunch of related tools together in one place and compare and contrast them.
It is very hard to explain Arcan but I tried:
https://www.theregister.com/2022/10/25/lashcat9_linux_ui/
I talked to Bjorn Stahl quite a bit before writing it, but he is so smart he seems to me to find it hard to talk down to mere mortals. There's a pretty good interview with him on Lobsters:
https://lobste.rs/s/w3zkxx/lobsters_interview_with_bjorn_sta...
You really should talk to him. Together you two could do amazing things. But IMHO let Jupyter go. There's a lot more to life than Python. :-)
Terminals are not "text oriented". They are based on a bidirectional stream of tokens - that can be interpreted as text, or anything else.
That simplicity allows for Unix-style composition. If you make the output something different, then the receiving program will need to be able to parse it. The Amiga OS had some interesting ideas with different data types as system extensions - you'd receive "an image" instead of a JPEG file and you could ask the system to parse it for you. In any case, that's still forcing the receiving program to know what it's receiving.
One way to add some level of complexity is to add JSON output to programs. Then you can push them through `jq` instead of `grep`, `sed`, or `awk`. Or push it through another tool to make a nice table.
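A toy version of the "another tool to make a nice table" half of that, assuming the upstream program emits newline-delimited JSON records on stdout:

```python
# Toy table renderer for newline-delimited JSON records on stdin, the sort of
# thing you'd pipe JSON-emitting tools into instead of hand-formatting columns.
import json, sys

rows = [json.loads(line) for line in sys.stdin if line.strip()]
if rows:
    headers = list(rows[0].keys())
    widths = {h: max(len(h), *(len(str(r.get(h, ""))) for r in rows)) for h in headers}
    print("  ".join(h.ljust(widths[h]) for h in headers))
    for r in rows:
        print("  ".join(str(r.get(h, "")).ljust(widths[h]) for h in headers))
```

Usage would be something like `some-json-tool | python3 table.py`.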
> it’s all legacy invented for a world that is no longer relevant.
I've been hearing that since the Lisa was introduced. Character streams are a pretty common thing today. They are also very useful thanks to their simplicity. Much like Unix, it's an example of the "worse is better" principle. It's simpler, dumber, and, because of that, its uses have evolved over decades with almost no change to the underlying plumbing required - the same tools that worked over serial lines, then multiplexed X.25 channels, then telnet, now work under SSH streams. Apps on both sides only need to know about the token stream.
One key aspect of the Unix way is that the stream is of bytes (often interpreted as characters) with little to no hint as to what's inside it. This way, tools like `grep` and `awk` can be generic and work on anything while others such as `jq` can specialize and work only on a specific data format, and can do more sophisticated manipulation because of that.
Fish shell does this too
We've had them for a long time. There have been multiple graphics standards terminals supported - Tektronix, ReGIS, Sixels, up to richer, less successful interfaces (such as AT&T's Blit and its successors - all gorgeous, all failed in the marketplace).
The notebook interface popularized by iPython is an interesting one, but it's not really a replacement for a terminal.
but why would they? what problems are they solving by being able to paste text into your web browser's address bar? or load a PDF into an LLM? or some other incredibly specific-to-you ability you've added?
if simply adding a lisp interpreter to a program is enough to impress people, why not add it to something other than a 1970s terminal text editor? surely an LLM plus lisp can do more of these inane tricks than a 70s text editor plus lisp?
An article called "A Spreadsheet and a Debugger walk into a Shell" [0] by Bjorn (letoram) is a good showcase of an alternative to cells in a Jupyter notebook (Excel like cells!). Another alternative a bit more similar to Jupyter that also runs on Arcan is Pipeworld.
[0] https://arcan-fe.com/2024/09/16/a-spreadsheet-and-a-debugger...
[1] https://arcan-fe.com/2021/04/12/introducing-pipeworld/
PS: I hang out at Arcan's Discord Server, you are welcome to join https://discord.com/invite/sdNzrgXMn7
programmatic text manipulation
Rid us of the text-only terminal baggage that we deal with today. Even graphics are encoded as text, sent to the terminal, then decoded and dealt with.
Plan9 had the terminal right. It wasn't really a terminal, it was just a window which had a text prompt by default. It could run (and display!) graphical applications just as easily as textual applications.
If you want a terminal of the future, stop embracing terminals of the past.
The terminal of plan9 was just a window. By default you got a shell with a textual prompt, but you can launch any graphical application in there or any textual application. You can launch a second window manager with its own windows. You can run Doom. You can `ls` and `ssh` all you like. It all just works.
This debuted in Plan9 in 1995 or so. 30 years ago we had the terminal of the future, and the entire world ignored it for some reason. I'm still a bit mad about it.
You're saying this with derision, but the ability to quickly add "incredibly specific-to-you" features is precisely what is so cool about it!
That's still text. Even PowerShell passes objects between commands.
Plan9 did this correctly. A terminal was just a window which could run graphical applications or textual applications. Locally or remotely. It all worked. You create a window, you get a shell with a text prompt. You can do text stuff all day long. But maybe you want that window to be a file manager now? Launch vdir, and now that same window is home to a graphical file browser. Close that and remote into another Plan9 machine. Launch Doom. It runs. It all just works, and it all works smoothly.
And the entire source code for that OS could fit into one person's brain.
It is a very simple OS, appears (to my layman eye) to have sandboxing between all applications by default (via per-process namespaces) making it very easy to keep one application off of your network while allowing others to talk via network as much as they want, for example.
It ticks some of the boxes, but tonnes of work would be needed to turn it into a full alternative.
It’s like powershell but not ugly and not Microsoft.
When using tools that can emit 0 to millions of lines of output, performance seems like table-stakes for a professional tool.
I'm happy to see people experiment with the form, but to be fit for purpose I suspect the features a shell or terminal can support should work backwards from benchmarks and human testing to understand how much headroom they have on the kind of hardware they'd like to support and which features fit inside it.
Any solution has to address this use case first, IMO. There are some design constraints here, like:
- I don't care about video game levels of graphics
- I generally want things to feel local, as opposed to say some cloud GUI
- byte stream model: probably bad? But how would I do better?
as just a few examples I thought of in 10 seconds; there's probably way more.
I've thought about the author's exact complaints for months, as an avid tmux/neovim user, but the ability to interact with system primitives on a machine that I own and understand is important.
But hey, those statements are design constraints too - modern machines are tied somewhat to unix, but not really. Sysadmin stuff? Got standardized into things like systemd, so maybe it's a bit easier.
So it's not just a cynical mess of "everything is shit, so let's stick to terminals!" but I'd like to see more of actually considering the underlying systems you are operating on, fundamentally, rather than immediately jumping to, sort of, "how do we design the best terminal" (effectively UI)? The actual workflow of being a systems plumber happens to be aided very well by tmux and vim :)
(And to be fair, I only make this critique because I had this vague feeling for a while about this design space, but couldn't formalize it until I read this article).
readline in bash and zle in zsh both default to the standard emacs bindings so you're covered there.
The emacs bindings also work in every Cocoa NSTextField on macOS.
As far as having to go and download and configure all of those: 1. you don't need to do any of that, and you certainly wouldn't need to do it all at the same time. Configuring one of those a month, when you come across needing one and find something in the default config you don't like, is definitely doable. 2. Once you do figure out your configs, they end up in your init.el. Emacs is preinstalled on macOS and a quick `$pkgmanager install emacs` away on Linux. Beyond that you can ship your entire setup just by copying over your emacs.d directory or init.el.
The same goes for basically any text editor, modern or not.
As you know, Emacs is more of a super environment that’s personally customised to a single individual. It wouldn’t make sense to hand over a fitted suit to someone else who is twice your size and then say “put it on, it looks good on me”.
For me, it's about making a repeated workflow efficient. Sure, I could alt+tab over to my PDF viewer, figure out the range of pages I want, then switch to my terminal window, run qpdf with the right arguments to split the PDF into chunks, alt+tab over to my web browser, log into Google's AI studio, mouse over add context to the LLM, navigate a file-open dialog to find my PDF, paste in my OCR prompt, have Gemini spit out my text, press download, navigate another file-open dialog, and then open the resulting file in my editor of choice.
Instead I can open my PDF, press a few keys, and have the whole process done for me without having to think too much about it and get back to wondering if this damn verb should be in the continual/habitual, completed, or more than completed past.
> why not add it to something other than 1970s terminal text editor?
We're responding to an article entitled "The terminal of the future", and even the GUI version of Emacs is still very much rooted in the paradigm of the terminal but with some very nice improvements. I'm arguing that much of the future this article pines for is already here.
VT500 should include ReGIS