366 points | nils-m-holm | 1 comment
nils-m-holm No.45037420
Second edition, with a new chapter on lambda calculus.
replies(1): >>45067854 #
gritzko No.45067854
Thanks. I recently had to reinvent LISP to script my CRDT database. That was not much work, because I already had the notation (I use RDX, a JSON superset with CRDT types). Still, I stumbled over the idiosyncratic LISP bracketing. Luckily, RDX allows for different tuple notations. So, I styled it to look less alien to a curly-braced developer. Like this: https://github.com/gritzko/go-rdx/blob/main/test/13-getput.j...

For example, print change-dir make-dir; is equivalent to (print (change-dir (make-dir))) in the old money. I wonder if I am reinventing too much here.

Did LISPers try to get rid of the brackets in the past?

replies(6): >>45068004 #>>45068097 #>>45068164 #>>45068705 #>>45069136 #>>45069145 #
drob518 No.45068097
There have been many attempts to get rid of sexprs in favor of a “better” syntax. Even John McCarthy, the inventor (discoverer?) of Lisp, had plans for an “M-expression” syntax to replace “S-expressions.” It never happened. The secret is that Lispers actually view sexprs as an advantage, not something to be worked around. Once you discover symbolic editing and code manipulation based on sexprs, you’ll never go back to weak line editing. That said, some Lisp dialects (e.g. Clojure and Racket) have embraced other symbols, like square and curly brackets, to keep the code more terse overall and optically break up longer runs of parentheses.

Probably the best example of a “Lisp without parentheses” is Dylan. Originally, Dylan was developed as a more traditional Lisp with sexprs, but they came up with a non-sexpr “surface syntax” before launching it, to avoid scaring the public.

replies(1): >>45069840 #
xedrac No.45069840
I actually really appreciate Racket's judicious use of square brackets in let expressions. It just makes visual parsing that much easier.
replies(1): >>45070659 #
drob518 No.45070659
Exactly. I also like Clojure’s use of square brackets for vectors and curly braces for maps. It eliminates all the “vector-” and “map-” function calls.
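
For instance, a trivial made-up snippet (Clojure), just to show what the literals replace:

  (vector 1 2 3)                    ; => [1 2 3]
  [1 2 3]                           ; the same vector as a literal

  (hash-map :name "Ada" :id 7)      ; => {:name "Ada", :id 7}
  {:name "Ada" :id 7}               ; the same map as a literal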
replies(1): >>45071784 #
xedrac No.45071784
Those are big quality of life improvements. I wish the other lisps would follow suit. I suppose I could just implement them myself with some macros, but having it standard would be sweet.
replies(1): >>45073694 #
tmtvl No.45073694
The Revised Revised Revised Revised Revised Revised Report on the Algorithmic Language Scheme (R6RS) specified that square brackets should be completely interchangeable with round brackets, which allows you to write let bindings or cond clauses like so:

  (let ([a (get-some-foo 1)]
        [b (get-some-foo 2)])
    (cond [(> a b) -1]
          [(< a b) 1]
          [else 0]))
...but I hate that; I'd much prefer it if square brackets were only used for vectors, which is why I have reader macros for square brackets -> vectors and curly brackets -> hash tables in my SBCL run commands.
replies(1): >>45074850 #
drob518 No.45074850
I think the R6RS behavior helps with visual matching, but it squanders square brackets that could be put to better use (e.g. for vectors), which is a shame. Another thing Clojure does is copy Arc in eliminating the parentheses around the pairs of forms in let bindings and cond forms, which really aren't needed: it just expects pairs of forms, and the compiler objects if given an odd number. The programmer can use whitespace (notably newlines) to format the code so the pairings are visually apparent. That removes a surprising number of needless parentheses, because let binding forms are used all over (less so cond forms).
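
For example, the Scheme let/cond from upthread, written with Clojure's flat pairs (get-some-foo is just the placeholder name from that example):

  (let [a (get-some-foo 1)
        b (get-some-foo 2)]
    (cond (> a b) -1
          (< a b)  1
          :else    0))

The binding vector keeps its outer square brackets, but each name/value pair is just two adjacent forms, and likewise each cond test/result pair (with :else as the catch-all test).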
replies(1): >>45077174 #
tmtvl No.45077174
Doing structural editing with unbracketed let bindings is pretty awful, though. And cond clauses being bracketed helps when a clause needs multiple forms:

  (cond ((printable? foo)
         (print foo)
         (newline)
         foo)
        (else
         (print-members foo)
         (newline)))
True, with modern machine-generated mass operations, refactoring is easier than with older tools, but that doesn't mean a given set of brackets is 'useless'.
replies(2): >>45078589 #>>45084397 #
kragen No.45078589
I think it's a matter of whether you're programming in a mostly applicative way† or in a more imperative way. Especially in the modern age of generational GC, Lisp cons lists support applicative programming with efficient applicative update, but sacrifice efficiency for certain common operations: indexing to a numerical position in a large list, appending to a list, or doing a lookup in a finite map such as an alist. So, in Common Lisp or Scheme, we are often induced to use vectors or hash tables, sacrificing applicative purity for efficiency—thus Perlis's quip about how purely applicative languages are poorly applicable, from https://www.cs.yale.edu/homes/perlis-alan/quotes.html.

In general a sequence of expressions of which only the value of the last is used, like C's comma operator or the "implicit progn" of conventional cond and let bodies, is only useful for imperative programming where the non-last expressions are executed for their side effects.

Clojure's HAMTs can support a wider range of operations efficiently, so Clojure code, in my limited experience, tends to be more purely applicative than code in most other Lisps.

Incidentally, a purely applicative finite map data structure I recently learned about (in December 02023) is the "hash trie" of Chris Wellons and NRK: https://nullprogram.com/blog/2023/09/30/. It is definitely less efficient than a hash table, but, in my tests so far, it's still about 100ns per hash lookup on my MicroPC and 250ns on my cellphone, compared to maybe 50ns or 100ns respectively for an imperative hash table without FP-persistence. It uses about twice as much space. This should make it a usable replacement for hash tables in many applications where either FP-persistence, probabilistically bounded insertion time, or lock-free concurrent access is required.

This "hash trie" is unrelated to Knuth's 01986 "hash trie" https://www.cs.tufts.edu/~nr/cs257/archive/don-knuth/pearls-..., and I think it's a greatly simplified HAMT, but I don't yet understand HAMTs well enough to be sure. Unlike HAMTs, it can also support in-place mutating access (and in fact my performance measurements above were using it).

______

† sometimes called "functional", though that can alternatively refer to programming with higher-order functions