I checked this with my R6RS implementation and it works just as you would expect (https://github.com/maplant/scheme-rs)
That being said, Steel is excellent and I highly recommend it if you just need R5RS with syntax transformers
For added fun, on the day he teaches it in class, he wears a t-shirt from Y Combinator, the startup accelerator (and explains what its name means).
Now that we've gotten that out of the way, it remains unclear what is surprising or unpleasantly surprising about the code.
Is Racket a good language to pick up to re-learn my concepts and implement some tools? Or are there other languages that would be better for both brushing up and learning the syntax? I do not want to fight the syntax; I want to express functions as seamlessly as I can.
https://cs.brown.edu/~sk/Publications/Papers/Published/fffkb...
That might help you decide whether Racket will help you with what you're trying to brush up on.
I did give your document a read and my (naive) understanding is you basically create DSLs for each sub-part of the problem you're trying to solve?
>A LOP-based software system consists of multiple, cooperating components, each written in domain-specific languages.
and
>cooperating multi-lingual components must respect the invariants that each participating language establishes.
So basically you're enforcing rules/checks at the language level rather than compile time?
How would you recommend a complete novice attain this sort of state of mind/thought process while working in this language? My own thoughts go straight to creating types, enforcing type-checking, and using pure functions to avoid programs that appear to succeed but fail at runtime.
Also how would one navigate the complexity of multiple abstractions while debugging?
The paper also mentions a web-server language (footnote 27). If I use Racket, will I be productive "out of the box", or is the recommended path to write a web-server language first?
Thank you again for taking the time to respond, and please do forgive me for these naive questions.
But also, if you're processing non-linear data, you're going to want to do it with a recursive function anyway. E.g., when dealing with a tree. Code below:
#lang racket
(require "anon-rec.rkt")
(require rackunit)

(struct mt ())
(struct node (v l r))

(define sum-tree
  (lam/anon (t)
    (cond [(mt? t) 0]
          [(node? t) (+ (node-v t)
                        ($MyInvocation (node-l t))
                        ($MyInvocation (node-r t)))])))
(define t (node 5 (node 3 (mt) (mt)) (node 7 (node 9 (mt) (mt)) (mt))))
(check-equal? (sum-tree t) 24)
Yes, what you're describing is the "extreme" version of LOP. Of course you don't have to do it that aggressively to get working code.
Two references I like to point to:
https://www.hashcollision.org/brainfudge/
They will give you a sense of how one uses LOP productively.
You do not need to write a "web server language"! To the contrary, the Web server provides several languages to give you a trade-off between ease and power in writing server-side Web applications. So you can just write regular Racket code and serve it through the server. The server also comes with some really neat, powerful primitives (orthogonal to LOP) — like `send/suspend` — that make it much easier to write server-based code.
Even if I don't go fully into it as a production language, hopefully it'll open some avenues of thought that I do not yet possess.
Thank you for taking the time to respond, have a great day!
(scroll down, after the concept is explained using Clojure)
A bit crazier, in Go with generics: https://eli.thegreenplace.net/2022/the-y-combinator-in-go-wi...
(Just me suggesting other alternatives right now)
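For flavor, here is a minimal sketch of the idea with Go generics. This is my own illustration, not the code from the linked post: it ties the recursive knot with a closure, rather than building the pure self-applicative combinator via a recursive type as the post does.

```go
package main

import "fmt"

// Fn is the shape of the single-argument functions we define recursively.
type Fn[T, R any] func(T) R

// Y hands a function a reference to itself, so an otherwise anonymous
// function body can recur. The self-reference is a recursive closure,
// the idiomatic Go shortcut for knot-tying.
func Y[T, R any](f func(Fn[T, R]) Fn[T, R]) Fn[T, R] {
	var self Fn[T, R]
	self = func(x T) R { return f(self)(x) }
	return self
}

// Factorial defined without naming itself: the body receives `rec`,
// its own recursive handle, from Y.
var fact = Y(func(rec Fn[int, int]) Fn[int, int] {
	return func(n int) int {
		if n == 0 {
			return 1
		}
		return n * rec(n-1)
	}
})

func main() {
	fmt.Println(fact(5)) // 120
}
```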
Then you need to retain the personnel who give you that capability. Because they are rare, in a field in which 99%+ of developers only glue together NPM or PyPI packages. (And many use Web search or, now, Claude Code to do the glue part.)
If I founded a startup doing mostly Web-like server backend work, I'd consider doing it in Racket or another Scheme, and then using that as a carrot to hire some of the most capable programmers. (And with hardly any resume-spam noise from the 99%+ of developers, who will be pounding the most popular tech-stack keywords instead, because their primary/sole goal is employability.)
((fn [xs ret]
(if (empty? xs)
ret
(recur (rest xs)
(+ ret (first xs)))))
(range 5) 0)
=> 10
nb. Clojure doesn't have automatic tail call optimisation. We need to explicitly emulate it with `recur`. The joke can go on forever...
Recur has zero inconvenience. It's four letters, it verifies that you are in a tail position, and it's portable if you take code to a new function or rename a function. What's not to love?
It focuses exclusively on FP and does not deviate from it.
(define-syntax rec
(syntax-rules ()
((rec (NAME . VARIABLES) . BODY)
(letrec ( (NAME (lambda VARIABLES . BODY)) ) NAME))
((rec NAME EXPRESSION)
(letrec ( (NAME EXPRESSION) ) NAME))))
[0] https://srfi.schemers.org/srfi-31/srfi-31.html

2. The README literally says "Don't Use This Macro!" and references `rec` to use instead:
https://github.com/shriram/anonymous-recursive-function?tab=...
They're niche because they're doing weird, interesting things. Like creating their own VMs to support funky features. So nobody wants to depend on them: low bus-factor.
They can do weird, interesting things because they don't have a large user-base that will yell at them about how they're breaking prod.
https://github.com/shriram/anonymous-recursive-function/comm...
Moreover, you can design cooperating macros that induce and take advantage of tail-position calls.
Here's a simple example that motivates tail-calls that are not tail-recursive:
https://cs.brown.edu/~sk/Publications/Papers/Published/sk-au...
https://news.ycombinator.com/item?id=45154253
would therefore not work.
> "nb. Clojure doesn't have automatic tail call optimisation. We need to explicitly emulate it with `recur`."
Just an average joe programmer here... advanced macrology is way above my pay grade :sweat-smile:

recur: https://clojuredocs.org/clojure.core/recur
> Evaluates the exprs in order, then, in parallel, rebinds the bindings of the recursion point to the values of the exprs.

(def factorial
(fn [n]
(loop [cnt n
acc 1]
(if (zero? cnt)
acc
(recur (dec cnt) (* acc cnt))
; in loop cnt will take the value (dec cnt)
; and acc will take the value (* acc cnt)
))))
trampoline: https://clojuredocs.org/clojure.core/trampoline

> trampoline can be used to convert algorithms requiring mutual recursion without stack consumption.
i.e. these emulate TCO, with similar stack consumption properties (they don't implement real TCO).
https://dl.acm.org/doi/pdf/10.1145/317636.317779
Usually the trampoline is implemented automatically by the language rather than forcing the author to confront it, though I can see why Clojure might have chosen to put the burden on the user.
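The driver-loop idea behind a trampoline is simple enough to sketch. Here is a hedged illustration in Go (the `step` type and `trampoline` function are my own names, not from any library): mutually recursive even/odd checks return thunks instead of calling each other directly, so the loop runs them in constant stack space.

```go
package main

import "fmt"

// A step either carries a final result or the next thunk to run.
type step struct {
	done   bool
	result bool
	next   func() step
}

// trampoline runs steps in a loop: each "call" returns a thunk instead
// of calling deeper, so the stack never grows.
func trampoline(s step) bool {
	for !s.done {
		s = s.next()
	}
	return s.result
}

// isEven / isOdd in trampolined style: without TCO, direct mutual calls
// would grow the stack; here they hand the next step back to the driver.
func isEven(n int) step {
	if n == 0 {
		return step{done: true, result: true}
	}
	return step{next: func() step { return isOdd(n - 1) }}
}

func isOdd(n int) step {
	if n == 0 {
		return step{done: true, result: false}
	}
	return step{next: func() step { return isEven(n - 1) }}
}

func main() {
	// A million mutual "calls" run in constant stack space.
	fmt.Println(trampoline(isEven(1_000_000))) // true
}
```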
More important is the debugability. If you have a normal data structure you can see the full stack of values. If you use recursion you have to unwind through multiple call frames and look at each one individually.
Recursion is for people who want to show a neat clever trick, it isn't the best way to program.
https://clojure.org/about/history
> Clojure is not the product of traditional research
> and (as may be evident) writing a paper for this setting
> was a different and challenging exercise.
> I hope the paper provides some insight into why
> Clojure is the way it is and the process and people
> behind its creation and development.
Recursive DFS took 4 ms
Iterative DFS took 8 ms
I'm sure you could optimize the explicit-stack version a bit more to reach parity, at the cost of a significantly more complex program.
But might as well let ~75 years of hardware, OS, and compiler advancements do that for you when possible.
> Why would copying a single value be slower than pushing an entire call frame to the stack
Because that's not what happens. The stack arithmetic is handled in hardware, increasing IPC significantly, and in the happy path, when all the relevant optimizations work out, the 'frame' you are talking about is almost the same size as a single value.
> More important is the debugability
Debugging recursive programs is pretty neat with most debuggers. No, you don't unwind through anything manually, just generate a backtrace.