
284 points borski | 2 comments
ltbarcly3 ◴[] No.44685504[source]
When I read this I just feel like Sussman is getting out of touch and maybe a little disillusioned. From the wording he seems annoyed about the change, isn't even sure why it was made "probably because there's some library for robots", has gone from someone who would have a complete understanding of complex systems to someone who sounds more like a frustrated beginner. "Doing basic science on libraries to see how they behave" instead of grabbing the source and looking at it - he is expressing the view of someone who wants to complete their ticket in Jira without too much effort and go home for the night rather than someone with an actual curiosity and enjoyment of what they are doing.
replies(1): >>44695836 #
monkeyelite ◴[] No.44695836[source]
Can you not see that there is a philosophical difference between these two approaches and why someone might have a preference for one or the other?
replies(2): >>44696809 #>>44698139 #
ltbarcly3 ◴[] No.44696809[source]
There isn't a philosophical difference. Both are problem solving strategies. You would choose one or the other depending on goals and constraints. There isn't a 'philosophical' reason to choose one or the other.

In the end the choice wasn't made for philosophical reasons. It was made because the robot library was in Python, or so he thinks. I think this actually shows there was a strong reason to never use Scheme: it's a failed language family. Outside of academic life support and very niche uses (which are usually directly caused by the academic uses, such as Emacs Lisp), Scheme just doesn't exist out in the world, despite a dozen very competent and complete implementations that are still supported. This is not an invitation for Lisp fanboys to stomp their feet; TIOBE doesn't have any (()())()(()) language in the top 20. Debate rankings all you want, but Lisp-type languages are extremely rarely used because people don't find them productive when given a choice.

replies(2): >>44698841 #>>44718773 #
so-cal-schemer ◴[] No.44718773{3}[source]
Thank goodness! Can you imagine a world where just anybody could leap tall buildings?

https://www.paulgraham.com/rootsoflisp.html

https://www.paulgraham.com/diff.html

https://www.paulgraham.com/icad.html

replies(2): >>44751906 #>>44757250 #
so-cal-schemer ◴[] No.44751906{4}[source]
I don't remember who said it, but the Metacircular Evaluator is the real superpower in Lisp/Scheme — even beyond writing macros. It allows you to modify the language itself in ways that are unthinkable in other languages.

Whether that is a good thing depends on you.
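To make the "modify the language itself" point concrete, here is a minimal sketch of a toy evaluator in the metacircular style, written in Python rather than Scheme so it stands alone (all names here are hypothetical, not from the thread): since evaluation is just one function over nested lists, adding a new special form is adding a clause.

```python
# Toy evaluator sketch (hypothetical): expressions are nested lists,
# environments are plain dicts mapping names to values.
def evaluate(expr, env):
    if isinstance(expr, str):           # variable reference
        return env[expr]
    if not isinstance(expr, list):      # self-evaluating literal
        return expr
    op, *args = expr
    if op == "if":                      # special form: (if test then else)
        test, then, alt = args
        return evaluate(then if evaluate(test, env) else alt, env)
    if op == "lambda":                  # (lambda (params...) body)
        params, body = args
        return lambda *vals: evaluate(body, {**env, **dict(zip(params, vals))})
    # "Modifying the language" is adding a clause here.  For example, an
    # invented (swap a b) form that evaluates its arguments right-to-left:
    if op == "swap":
        b = evaluate(args[1], env)
        a = evaluate(args[0], env)
        return [a, b]
    fn = evaluate(op, env)              # ordinary application
    return fn(*[evaluate(a, env) for a in args])

env = {"add": lambda x, y: x + y}
print(evaluate(["add", 1, ["if", True, 2, 3]], env))  # → 3
```

In a real metacircular evaluator the host language is the same as the interpreted language, which is what makes this kind of surgery feel so direct in Lisp/Scheme; the sketch only shows the shape of the trick.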

replies(3): >>44752878 #>>44753908 #>>44767233 #
kazinator ◴[] No.44752878{5}[source]
I would say that it's lesser than macros because macros let you write a "metacircular compiler".

You can maintain performance across multiple nestings of compiled languages.

Whereas if we write a metacircular interpreter in a compiled Lisp, we now have something lesser than the host language: an interpreted dialect. And then, if we write another metacircular interpreter in that dialect, we have something even slower: an interpreter interpreting the code of an interpreter which interprets code. With each level of embedding, we lose orders of magnitude of performance.

Metacircular interpreters are good for showing something cool: that you can document a possible model of how a language works using nothing but a small amount of code in that language. Someone who has learned how to use the language can then read that small amount of code, understand it, and acquire that model easily. Hopefully with the understanding that it's not the only model. (For instance, lexical variables don't have to be an association list extended by consing.)
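The "association list extended by consing" model mentioned above can be sketched in a few lines; this is a Python illustration with hypothetical names, standing in for the classic Scheme alist environment. Each binding conses a pair onto the front, and lookup scans linearly, so inner bindings shadow outer ones for free:

```python
# Environment-as-alist sketch: an environment is either None (empty) or a
# pair ((name, value), rest), i.e. a cons cell in Lisp terms.
def extend(env, name, value):
    return ((name, value), env)        # cons a new binding onto the front

def lookup(env, name):
    while env is not None:             # scan front-to-back
        (n, v), env = env
        if n == name:
            return v                   # first hit wins, so inner shadows outer
    raise NameError(name)

env = extend(extend(None, "x", 1), "x", 2)   # rebinding x shadows the old x
print(lookup(env, "x"))  # → 2
```

The point of the parenthetical above is exactly that this is one model among several: real implementations typically use frames, vectors, or compiled lexical addressing instead.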

And of course academics have studied metacircular interpreters, and their theoretical properties. In that way they're also legitimate objects of interest.

replies(1): >>44811350 #
so-cal-schemer ◴[] No.44811350[source]
Fascinating!

I've seen references to Dan Friedman's work on this in the '80s. It looks very powerful, but he's said it made his head hurt. I bet there will be fruitful research layering interpreters like this with AI code generation.

Also, Nada Amin and Tiark Rompf had something to say about this:

https://dl.acm.org/doi/10.1145/3158140

"12 CONCLUSIONS

We have shown how to collapse towers of interpreters using a stage-polymorphic multi-level λ-calculus λ↑↓. We have also shown that we can re-create a similar effect using LMS and polytypic programming via type classes. We have discussed several examples including novel reflective programs in Purple / Black. Looking beyond this paper, we believe that collapsing towers, in particular heterogeneous towers, has practical value. Here are some examples:

(1) It is often desirable to run other languages on closed platforms, e.g., in a web browser. For this purpose, Emscripten [Zakai 2011] translates LLVM code to JavaScript. Similarly, Java VMs [Vilk and Berger 2014] and even entire x86 processor emulators [Hemmer 2017] that are able to boot Linux [Bellard 2017] have been written in JavaScript. It would be great if we could run all such artifacts at full speed, e.g., a Python application executed by an x86 runtime, emulated in a JavaScript VM. Naturally, this requires not only collapsing of static calls, but also adapting to a dynamically changing environment.

(2) It can be desirable to execute code under modified semantics. Key use cases here are: (a) instrumentation/tracing for debugging, potentially with time-travel and replay facilities, (b) sand-boxing for security, (c) virtualization of lower-level resources as in environments like Docker, and (d) transactional execution with atomicity, isolation, and potential rollback.

(3) Non-standard interpretations, e.g., program analysis, verification, synthesis. We would like to reuse those artifacts if they are implemented for the base language. For example, a Racket interpreter in miniKanren [Byrd et al. 2017] has been shown to enable logic programming for a large class of Racket programs without translating them to a relational representation. Other examples are the Abstracting Abstract Machines (AAM) framework [Van Horn and Might 2011], which has recently been extended to abstract definitional interpreters [Darais et al. 2017]. For these indirect approaches to be effective, it is important to remove intermediate interpretive abstractions which would otherwise confuse the analysis.

For these use cases, our approach hints at a solution where we only need to manually lift the meta interpreter of the user level while the rest of the tower acts in a kind of pass-through mode, handing down staging commands to the lowest level, which needs to support stage polymorphism. Last but not least, it is important to note that the present work is based on interpreters derived from variations of the λ-calculus, and thus leaves a gap towards collapsing heterogeneous towers of truly independent languages. This gap is especially prominent in a setting where a language level does not follow the usual functional or imperative paradigm, e.g., if a logic programming language or a probabilistic programming language is part of the tower. Thus, we hope that our work spurs further activity in implementing stage polymorphic virtual machines and collapsing towers of interpreters in the wild."
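The intuition behind "collapsing" a level can be caricatured in a few lines. This is a drastically simplified Python illustration (hypothetical names, nothing like λ↑↓ or LMS): a plain interpreter re-walks the expression tree on every call, while a staged version walks it once and emits a closure, removing that interpretive layer from the hot path.

```python
# Toy expressions: the string "x" is the variable, ("+", a, b) and
# ("*", a, b) are operations.
def interpret(expr, x):
    # Interpreter level: traverses expr on every single call.
    if expr == "x":
        return x
    op, a, b = expr
    if op == "+":
        return interpret(a, x) + interpret(b, x)
    return interpret(a, x) * interpret(b, x)

def stage(expr):
    # "Collapsed" version: traverse expr once, return a closure in which
    # the interpretive dispatch has been specialized away.
    if expr == "x":
        return lambda x: x
    op, a, b = expr
    fa, fb = stage(a), stage(b)
    if op == "+":
        return lambda x: fa(x) + fb(x)
    return lambda x: fa(x) * fb(x)

expr = ("+", ("*", "x", "x"), "x")     # x*x + x
compiled = stage(expr)
print(interpret(expr, 5), compiled(5))  # → 30 30
```

The paper's contribution is doing this systematically through an entire tower of interpreters, not just one level, which is far harder than this sketch suggests.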

https://www.codemesh.io/codemesh2017/nada-amin

Nada Amin - Collapsing Towers of Interpreters - Code Mesh 2017

https://www.youtube.com/watch?v=Ywy_eSzCLi8

Tiark Rompf - [POPL'18] Collapsing Towers of Interpreters

https://www.youtube.com/watch?v=QLyBxXqml5Y