Works great on Apple Silicon
Once you've got that working, try installing a 2.11BSD distribution. It's well-documented and came after a lot of the churn in early Unix. After that, I've had great fun playing with RT-11, to the point that I've actually written some small apps on it.
First time I've seen people use 'ed' for work!
I wonder who else has to deal with ed. Recently I had to connect to an ancient system where vi was not available, so I had to write my own editor. Whoever needs an editor for an ancient system, ping me (it is not too fancy).
Amazing work by the creators of this software and by the researchers; you have my full respect. Those are the real engineers!
> It's somewhat picky about the environment. So far, aap's PDP-11/20 emulator (https://github.com/aap/pdp11) is the only one capable of booting the kernel. SIMH and Ersatz-11 both hang before reaching the login prompt. This makes installation from the s1/s2 tapes difficult, as aap's emulator does not support the TC11. The intended installation process involves booting from s1 and restoring files from s2.
E.g. since the MAME project considers itself living documentation of arcade hardware, it would be more properly classified as a simulator, while the goal of most other video game emulators is just to play the games.
In practice the terms are often conflated.
echo 'int main(void) { printf("hello!\n"); }' > hello.c
...EXCEPT...It's not, because the shell redirection operators didn't exist yet at this point in time. Maybe (or maybe not?) it would work to cat to the file from stdin and send a Ctrl-D down the line to close the descriptor. But even that might not have been present yet. Unix didn't really "look like Unix" until v7, which introduced the Bourne shell and most of the shell environment we know today.
The feedback from the editor, however, is… challenging.
What I found entertaining was that when he was explaining how to compile the kernel, I went, "Oh! That's where OpenBSD gets it from." It is still a very similar process.
Earlier, I wrote an editor for card images stored on disks. Very primitive.
it hasn't
definitely agree on simulator though!
One hour long, and Thompson tells a lot of entertaining stories. Kernighan does a good job of just letting Thompson speak.
And since then I never used it again, nor ed when a couple of years later we had Xenix access, as vi was a much saner alternative.
But, for a whole bunch of reasons, I'm left with the suspicion you may be misremembering something from the early 1970s as happening in the 1960s. While it isn't totally impossible you had this experience in 1968 or 1969, a 1970s date would be much more historically probable.
I thought it was early versions of the Rust compiler, but I can't seem to find any references to it. Maybe it was Go?
EDIT: Found it: 'rust-lang/rust#13871: "hello world" contains Lovecraft quotes' https://github.com/rust-lang/rust/issues/13871
https://hackaday.com/2017/01/03/make-logic-gates-out-of-almo...
Like the culture produced and consumed on social media and many other manifestations of Internet culture it is perfectly ephemeral and disposable. No history, no future.
SaaS is not just closed but often effectively tied to a literal single installation. It could be archived and booted up elsewhere but this would be a much larger undertaking, especially years later without the original team, than booting 1972 Unix on a modern PC in an emulator. That had manuals and was designed to be installed and run in more than one deployment. SaaS is a plate of slop that can only be deployed by its authors, not necessarily by design but because there are no evolutionary pressures pushing it to be anything else. It's also often tangled up with other SaaS that it uses internally. You'd have to archive and restore the entire state of the cloud, as if it's one global computer running proprietary software being edited in place.
> ...and considers all succeeding lines to be the message text. It is terminated by a line containing only a period, upon which a 250 completion reply is returned.
But in 01980 Unix had only been released outside of Bell Labs for five years and was only starting to support ARPANET connections (using NCP), so I wouldn't expect it to be very influential on ARPANET protocol design yet. I believe both Sluizer and Postel were using TOPS-20; the next year the two of them wrote RFC 786 about an interface used under TOPS-20 at ISI (Postel's institution, not sure if Sluizer was also there) between MTP and NIMAIL.
For some context, RFC 765, the June 01980 version of FTP, extensively discusses the TOPS-20 file structure, mentions NLS in passing, and mentions no other operating systems in that section at all. In another section, it discusses how different hardware typically handles ASCII:
> For example, NVT-ASCII has different data storage representations in different systems. PDP-10's generally store NVT-ASCII as five 7-bit ASCII characters, left-justified in a 36-bit word. 360's store NVT-ASCII as 8-bit EBCDIC codes. Multics stores NVT-ASCII as four 9-bit characters in a 36-bit word. It may be desirable to convert characters into the standard NVT-ASCII representation when transmitting text between dissimilar systems.
Note the complete absence of either of the hardware platforms Unix could run on in this list!
(Technically Multics is software, not hardware, but it only ever ran on a single hardware platform, which was built for it.)
RFC 771, Cerf and Postel's "mail transition plan", admits, "In the following, the discussion will be hoplessly [sic] TOPS20[sic]-oriented. We appologize [sic] to users of other systems, but we feel it is better to discuss examples we know than to attempt to be abstract." RFC 773, Cerf's comments on the mail service transition plan, likewise mentions TOPS-20 but not Unix. RFC 775, from December 01980, is about Unix, and in particular, adding hierarchical directory support to FTP:
> BBN has installed and maintains the software of several DEC PDP-11s running the Unix operating system. Since Unix has a tree-like directory structure, in which directories are as easy to manipulate as ordinary files, we have found it convenient to expand the FTP servers on these machines to include commands which deal with the creation of directories. Since there are other hosts on the ARPA net which have tree-like directories, including Tops-20 and Multics, we have tried to make these commands as general as possible.
RFC 776 (January 01981) has the email addresses of everyone who was a contact person for an Internet Assigned Number, such as JHaverty@BBN-Unix, Hornig@MIT-Multics, and Mathis@SRI-KL (a KL-10 which I think was running TOPS-20). I think four of the hosts mentioned are Unix machines.
So, there was certainly contact between the Unix world and the internet world at that point, but the internet world was almost entirely non-Unix, and so tended to follow other cultural conventions. That's why, to this day, commands in SMTP and header lines in HTTP/1.1 are terminated by CRLF and not LF; why FTP and SMTP commands are all four letters long and case-insensitive; and why reply codes are three-digit hierarchical identifiers.
So I suspect the convention of terminating input with "." on a line of its own got into ed(1) and SMTP from a common ancestor.
I think Sluizer is still alive. (I suspect I met her around 01993, though I don't remember any details.) Maybe we could ask her.
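That shared "." convention is easy to sketch. Here is a hypothetical reader (names are my own) that collects lines until a lone ".", following the rule quoted above plus SMTP's transparency ("dot-stuffing") rule from RFC 5321, under which a line beginning with "." has an extra "." prepended in transit:

```python
def read_dot_terminated(lines):
    """Collect message lines until a line containing only '.',
    as both ed(1) input mode and SMTP DATA do."""
    body = []
    for line in lines:
        if line == ".":
            break  # lone dot terminates the message
        if line.startswith("."):
            # Undo SMTP dot-stuffing: strip the extra leading dot
            # that the sender added for transparency.
            line = line[1:]
        body.append(line)
    return body

print(read_dot_terminated(["hello", "..dot line", "."]))
# ['hello', '.dot line']
```

ed(1) does no dot-stuffing, which is why a line consisting of a single "." simply cannot appear in text entered via its input mode; SMTP added the transparency rule so arbitrary text survives.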
Side note: that ~1 MIP 3B2 could support about 20 simultaneous users…
Did they treat this as a 9-5 effort, or did they go into a “goblin mode” just to get it done while neglecting other aspects of their lives?
https://gitlab.com/segaloco/v1man/-/blob/master/man1/stat.1?...
for sdrwrw:
- column 1 is s or l meaning small or large
- column 2 is d, x, u, or -, meaning directory, executable, setuid, or nothing.
- the rest are read-write bits for owner and non-owner.
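Read as a decoder, those columns look something like this (a hypothetical sketch based only on the column meanings listed above; the function and field names are my own):

```python
def decode_v1_mode(mode):
    """Decode a six-character V1 Unix mode string like 'sdrwrw',
    per the column meanings described above."""
    size = {"s": "small", "l": "large"}[mode[0]]
    kind = {"d": "directory", "x": "executable",
            "u": "setuid", "-": "plain file"}[mode[1]]
    owner = mode[2:4]   # read/write bits for the owner
    other = mode[4:6]   # read/write bits for everyone else
    return size, kind, owner, other

print(decode_v1_mode("sdrwrw"))
# ('small', 'directory', 'rw', 'rw')
```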
If you want to play around with RT-11 again, I made a small PDP-11/03 emulator + VT240 terminal emulator running in the browser. It's still incomplete, but you can play around with it here: https://lsi-11.unknown-tech.eu/ (source code: https://github.com/unknown-technologies/weblsi-11)
The PDP-11/03 emulator itself is good enough that it can run the RT-11 installer to create the disk image you see in the browser version. The VT240 emulator is good enough that the standalone Linux version can be used as a terminal emulator for daily work. Once I have time, I plan to write a proper blog post describing how it all works and what the challenges were, and post it as a Show HN eventually.
I suspect the sentiment is more that it would be nice to live in a simpler time, with fewer options, because it would reduce anxiety we all feel about not being able to "keep up" with everything that is going on. Or maybe I'm just projecting.
On the other hand, I never really tried to do anything with TECO other than run VTEDIT.
The alternative is to use a decent VT emulator attached to roughly any monitor. By "decent" I certainly don't mean projects like cool-retro-term, but rather something like this, which I started to develop some time ago and which I'm using as my main terminal emulator now: https://github.com/unknown-technologies/vt240
There is a basic and totally incomplete version of a VT240 in MAME though, which is good enough to test certain behavior, but it completely lacks the graphics part, so you can't use it to check graphics behavior like DRCS and so on.
EDIT: I also know for sure that there is a firmware emulation of the VT102 available somewhere.
[0] it was an IBM PC clone, an ISA bus 386SX, made by TPG - TPG are now one of Australia’s leading ISPs, but in the late 1980s were a PC clone manufacturer. It had a 40Mb hard disk, two 5.25 inch floppy drives (one 1.2Mb, the other 360Kb), and a vacant slot for a 3.5 inch floppy, we didn’t actually install the floppy in it until later. I still have it, but some of the innards were replaced, I think the motherboard currently in it is a 486 or Pentium
An interpreter either writes it to bytecode and then executes the bytecode line by line?
At least that is what I believe the difference is. Care to elaborate? Is there some hidden joke of compiler vs. interpreter that I don't know about?
It just feels like one is an emulator if its philosophy is "it just works", and a simulator if it's "well, sit down kids, I am going to give you proper documentation and how it was built back in my day".
but I wonder what that means for programs themselves...
I wonder if simulator == emulator is truer than what JavaScript's truthy comparisons allow.
Calling the same thing a different name.
The goals merely overlap, which is obvious. Equally obviously, if two goals are similar, then the implementations of some way to attain those goals may equally have some overlap, maybe even a lot of overlap. And yet the goals are different, and it is useful to have words that express aspects of things that aren't apparent from merely the final object.
A decorative brick and a structural brick may both be the same physical brick, yet if the goals are different then any similarity in the implementation is just a coincidence. It would not be true to say that the definition of a decorative brick includes the materials and manufacturing steps and final physical properties of a structural brick. The definition of a decorative brick is to create a certain appearance, by any means you want, and it just so happens that maybe the simplest way to make a wall that looks like a brick wall is to build an actual brick wall.
If only they had tried to make it clear that there is overlap and the definitions are grey and fuzzy and open to personal philosophic interpretation and the one thing can often look and smell and taste almost the same as the other thing, if only they had said anything at all about that, it might have headed off such a pointless confusion...
Every programmer who has a project in mind should try this: put away 3 weeks of focus time in a cabin, away from work and family, gather every book or document you need, and cut off the Internet. Use a dumb phone if you can live with it. See how far you can go. Just make sure it is something you have already put a lot of thought and a bit of code into.
After thinking more thoroughly about the idea, I believe low-level projects that rely on as few external libraries as possible are the best ones to try it out on. If your project relies on piles of 3rd-party libraries, you are stuck the moment you hit an issue, with no Internet to help you figure it out. Ken picked the right project too.
For compilers, constant folding is a pretty obvious optimization. Instead of compiling constant expressions, like 1+2, to code that evaluates those expressions, the compiler can already evaluate it itself and just produce the final result, in this case 3.
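A minimal sketch of that idea, assuming a toy expression tree rather than any real compiler's IR:

```python
# Constant folding over a toy expression tree.
# ("const", n) is a literal; ("add", left, right) is an addition.
def fold(node):
    if node[0] == "const":
        return node
    _, left, right = node
    left, right = fold(left), fold(right)
    if left[0] == "const" and right[0] == "const":
        # Both operands are known at compile time: evaluate now
        # instead of emitting code to add them at runtime.
        return ("const", left[1] + right[1])
    return ("add", left, right)

print(fold(("add", ("const", 1), ("const", 2))))
# ('const', 3)
```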
Then, some language features require compilers to perform some interpretation, either explicitly like C++'s constexpr, or implicitly, like type checking.
Likewise, interpreters can do some compilation. You already mentioned bytecode. Producing the bytecode is a form of compilation. Incidentally, you can skip the bytecode and interpret a program by, for example, walking its abstract syntax tree.
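Both styles can be sketched on the same toy expression (hypothetical names; not modeled on any real interpreter):

```python
# Style 1: tree-walking interpreter, evaluating the AST directly.
def walk(node):
    if node[0] == "const":
        return node[1]
    return walk(node[1]) + walk(node[2])  # "add" node

# Style 2: compile to a stack-machine bytecode, then interpret that.
def compile_expr(node, out):
    if node[0] == "const":
        out.append(("PUSH", node[1]))
    else:
        compile_expr(node[1], out)
        compile_expr(node[2], out)
        out.append(("ADD", None))
    return out

def run(code):
    stack = []
    for op, arg in code:
        if op == "PUSH":
            stack.append(arg)
        else:  # ADD
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack.pop()

expr = ("add", ("const", 1), ("const", 2))
print(walk(expr), run(compile_expr(expr, [])))  # 3 3
```

The bytecode pass is a small compilation step even though the whole pipeline is usually called an interpreter.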
Also, compilers don't necessarily create binaries that are immediately runnable. Java's compiler, for example, produces JVM bytecode, which requires a JVM to be run. And TypeScript's compiler outputs JavaScript.
cd /sys/arch/$(machine)/conf
cp GENERIC CUSTOM
vi CUSTOM # make your changes
config CUSTOM
cd ../compile/CUSTOM
make
https://www.openbsd.org/faq/faq5.html

I have never done it for 2BSD, but according to http://www.vaxman.de/publications/bsd211_inst.pdf:
cd /usr/src/sys/conf
cp GENERIC CUSTOM
vi CUSTOM
./config CUSTOM
cd /sys/CUSTOM
make
I think this is key. If you already have the architecture worked out in your head, then it's just smashing away at the keyboard. Once you add a 3rd-party library, you can spend most of your time fighting with and learning about it.
- Put away a few weeks and go into Hermit mode;
- Plan ahead what projects they have in mind, which books/documents to bring with them. Do enough research and a bit of experimental coding beforehand;
- Reduce distraction to minimum. No Internet. Dumb phone only. Bring a Garmin GPS if needed. No calls from family members;
I wouldn't be surprised if they could level up their skills and complete a tough project in three weeks. Surely they won't write a UNIX or a Git, but a demanding project is feasible with the research done before going into hermit mode.
And it shows.
I am joking of course, git is pretty great. Well, half-joking: what is it about Linux that attracts such terrible interfaces? git vs hg, iptables vs pf. There is a lot of technical excellence present, marred by a substandard interface.
If you're willing to let everything crash if you stray from the happy path you can be remarkably productive. Likewise if you make your code work on one machine, on a text interface, with no other requirements except to deliver the exact things you need.
It's a bit like any early industry, from cars to airplanes to trains. Earlier models were made by a select few people, and there were several versions until today, where GM and Ford have thousands of people involved in designing a single car iteration.
I don't know what the difference is. I know there can be interpreters of compilers, but generally speaking it's hard to find compilers of interpreters.
E.g. C++ has both compilers and interpreters (cpi), gcc.
JS doesn't have compilers IIRC; it can have transpilers. Js2c is a good one, but I am not sure if they are failsafe (70% ready).
I also have to thank you; this is a great comment.
What I have in mind are embedded projects -- you are probably going to use the result even when you are the only user, so that fixes the motivation issue. You probably have a clean-cut objective, so that ticks the other checkbox. You need to bring a dev board, a bunch of breadboards, and electronic components to the cabin, but those don't take a lot of space. You need the specifications of the dev board and of the components used in the project, but those are just PDF files anyway. Need some C best practices? There must be a PDF for that. You can do a bit of experimental coding before you leave for the cabin, to make sure the idea is solid and feasible and the toolchain works. The preparations give you a wired-up breadboard and maybe a few hundred lines of C code. That's all you need to complete the project in 3 weeks.
Game programming, modding, and mapping come to mind, too. They are fun, clean-cut, and well defined. The thing is, you might need the Internet to check documents or algorithms from time to time, but it is a lot better to cut off the Internet completely. I think they fit if you are well into them already -- and then you boost them by working 3 weeks in a cabin.
There must be other lower level projects that fit the bill. I'm NOT even a good, ordinary programmer, so the choices are few.
A compiler processes the code and provides an intermediate result which is then "interpreted" by the machine.
So to take the "writes it in byte code" -- that is a compiler. "Executes the byte code" -- that is the interpreter.
If byte code is "machine code" or not, is really secondary.
The same tool can often be used to do both. Trivial example: a web browser. Save your web page as a PDF? Compiler. Otherwise, interpreter. But what if the code it is executing is not artisanal handcrafted JS but the result of a TypeScript compiler?
As for the comparison with the JVM: compare it to a compiler that produces x86 code, which cannot be run without an x86 machine. You need a machine to run something, be it virtual or not.
A compiler takes the same thing, but produces an intermediate form (byte code, machine code, or another language; that last kind is sometimes called a "transpiler"). That you can then pass through an interpreter of sorts.
There is no difference between Java and the JVM, Python and the Python Virtual Machine, or even a C compiler targeting x86 and an x86 CPU. One might call some byte code, and the other machine code... they do the same thing.
* The first axis is static vs dynamic types. Java is mostly statically-typed (though casting remains common and generics have some awkward spots); Python is entirely dynamically-typed at runtime (external static type-checkers do not affect this).
* The second axis is AOT vs JIT. Java has two phases - a trivial AOT bytecode compilation, then an incredibly advanced non-cached runtime native JIT (as opposed to the shitty tracing JIT that dynamically-typed languages have to settle for); Python traditionally has an automatically-cached barely-AOT bytecode compiler but nothing else (it has been making steps toward runtime JIT stuff, but poor decisions elsewhere limit the effectiveness).
* The third axis is indirect vs inlined objects. Java and Python both force all objects to be indirect, though they differ in terms of primitives. Java has been trying to add support for value types for decades, but the implementation is badly designed; this is one place where C# is a clear winner. Java can sometimes inline stack-local objects though.
* The fourth axis is deterministic memory management vs garbage collection. Java and Python both have GC, though in practice Python is semi-deterministic, and the language has a somewhat easier way to make it more deterministic (`with`, though it is subject to unfixable race conditions)
I have collected a bunch more information about language implementation theory: https://gist.github.com/o11c/6b08643335388bbab0228db763f9921...
It is a tiny distinction, but generally I'd say that a simulator tries to accurately replicate what happens at the electrical level, as well as one can.
While an emulator just does things as a black box ... input produces the expected output using whatever.
You could compare it to that an accurate simulator of a 74181 tries to do it by using AND/OR/NOT/... logic, but an emulator does it using "normal code".
In HDL you have a similar situation between structural and behavioral design... structural is generally based on much lower-level logic (e.g., AND/NOR gates), and behavioral on higher-level logic (addition, subtraction, ...).
"100%" accuracy can be achieved with both methods.
While some ideas, like hierarchical filesystems, were new, it was mainly a modernized version of CTSS, according to Dennis Ritchie's paper "The UNIX Time-sharing System: A Retrospective".
I was playing with this version on SIMH way too late last night, taking a break from ITS. Being very familiar with v7, 2.11, etc., I can say it is quite clearly very cut down.
I think being written in assembly, with an assembler they produced by copying the DEC PAL-11R, helped a lot.
If you look through the v1 here:
https://www.tuhs.org/Archive/Distributions/Research/Dennis_v...
It is already very modular, and obviously helped by dmr's MIT work:
https://people.csail.mit.edu/meyer/meyer-ritchie.pdf
And yet... after working for years on an ultra-complex OS intended to provide "utility scale" compute, writing a fairly simple OS for a tiny mini would be much easier... if not so for us mortals.
It isn't like they had just come out of a coding boot camp... they needed the tacit knowledge and experience to push out 100K+ lines in one year, from two people, over 300 bps terminals, etc.
My Excel skills completely blow, and I hate Microsoft with a passion, but I created a shared spreadsheet one long Saturday afternoon that had more functionality than our $80K annual ERP system. Showed it to a few more open-minded employees, then moved it to my server, never to be shown again. Just wanted to prove when I said the ERP system was pointless, that I was right.
And yes, in August 01972 probably nobody at MIT had ever used ed(1) at Bell Labs. Not impossible, but unlikely; in June, Ritchie had written, "[T]he number of UNIX installations has grown to 10, with more expected." But nothing about it had been published outside Bell Labs.
The rationale is interesting:
> The 'MLFL' command for network mail, though a useful and essential addition to the FTP command repertoire, does not allow TIP users to send mail conveniently without using third hosts. It would be more convenient for TIP users to send mail over the TELNET connection instead of the data connection as provided by the 'MLFL' command.
So that's why they added the MAIL command to FTP, later moved to MTP and then in SMTP split into MAIL, RCPT, and DATA, which still retains the terminating "CRLF.CRLF".
https://gunkies.org/wiki/Terminal_Interface_Processor explains:
> A Terminal Interface Processor (TIP, for short) was a customized IMP variant added to the ARPANET not too long after it was initially deployed. In addition to all the usual IMP functionality (including connection of host computers to the ARPANET), they also provided groups of serial lines to which could be attached terminals, which allowed users at the terminals access to the hosts attached to the ARPANET.
> They were built on Honeywell 316 minicomputers, a later and un-ruggedized variant of the Honeywell 516 minicomputers used in the original IMPs. They used the TELNET protocol, running on top of NCP.