In Common Lisp too, by defining defalias as a macro:
https://stackoverflow.com/questions/24252539/defining-aliase...
That’s changed, of course, but it remained true for at least another 15 or 20 years after this article was written and then changed rather quickly, perhaps cemented with deep neural networks and GPUs.
Other than running the emacs ecosystem, what else is Lisp being used for commonly these days?
Algol? The kernel of Algol seems as natural as a branch of mathematics? Can anyone who has used Algol give their opinion of this statement?
God I miss old Scientific American. Today's SA isn't especially terrible, but old SA, like old BYTE, was reliably enlightening.
I know a lot of people classify "Clojure" and "Lisp" in different categories, but I'm not 100% sure why.
[1] Usual disclaimer: It's not hard to find my job history, I don't hide it, but I politely ask that you don't post it here.
I hate that people are convinced LISP == functional programming, writ large. Not that I dislike functional programming, but the symbolic nature of it is far more interesting to me. And it amuses me to no end that I can easily make a section of code that is driven by (go tag) sections, such that I can get GOTO programming in it very easily.
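A minimal sketch of that GOTO style in Common Lisp, using the standard tagbody/go operators; count-down is just an invented example:

(defun count-down (n)
  ;; Explicit GOTO-style control flow: AGAIN and DONE are tagbody labels.
  (tagbody
   again
     (when (<= n 0)
       (go done))
     (print n)
     (setf n (1- n))
     (go again)
   done)
  'finished)

(count-down 3)   ; prints 3 2 1, then returns FINISHED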
I was happy with the section in Wireframe magazines that would show how to code some game mechanics every issue. Would love for more stuff like that.
It mostly boils down to Clojure not having CONS cells. I feel like this distinction is arbitrary because the interesting aspect of Lisps is not the fact that linked-lists are the core data-structure (linked-lists mostly suck on modern hardware), but rather that the code itself is a tree of lists that enables the code to be homoiconic.
> what else is Lisp being used for commonly these days?
Anything that runs on Clojure - Cisco has their cybersec platform and tooling running on it; Walmart their receipt system; Apple - their payments (or something, not sure); Nubank's entire business runs on it; CircleCI; Embraer - I know they use Clojure for pipelines, not sure about CL; in general I think Common Lisp is still quite used for aircraft design and CAD modeling; Grammarly - uses both Common Lisp and Clojure; many startups use Clojure and ClojureScript.
Fennel - a Clojure-like language that compiles to Lua - can handle anything Lua-based: people build games with it, use it to configure their Hammerspoon, AwesomeWM, MPV, Wez terminal and the like, even Neovim. It's almost weird how we're circling back - decades of arguing Emacs vs. Vim, and now getting Vim to embrace Lisp.
I don't know about the work's true impact on AI or tech languages, but it's a masterpiece of criticism, analysis and penmanship.
I remember it as a likeable, economical, expressive language, without significant warts, and which had clearly been influential by being ahead of its time.
So my guess is that Hofstadter was just referring to its practical elegance - rather than the more theoretical elegance of Lisp.
It is being used for formal verification in the semiconductor industry by companies like AMD, Arm, Intel, and IBM: https://www.cs.utexas.edu/~moore/acl2/
The syntax was a bigger problem than Lisp's syntax, though.
It's not easy to produce a language with a syntax that's good as daily use syntax, but is also not unwieldy as an AST. Lisp is one of the few relatively successful examples.
Yann LeCun developed Lush, which is a Lisp for neural networks, during the early days of deep architectures. See https://yann.lecun.com/ex/downloads/index.html and https://lush.sourceforge.net. Things moved to Python after a brief period when Lua was also a serious contender. LeCun is not pleased with Python. I can't find his comments now, but he thinks Python is not an ideal solution. Hard to argue with that, as it's mostly a thin wrapper over C/C++/FORTRAN that poses an obvious two-language problem.
Yeah. XML and S expressions are pretty close to functionally equivalent. But once you've seen S expressions, XML is disgustingly clumsy.
In addition to REPL and macros, I think two other Lispy features are essential:
nil is not just the sad path poison value that makes everything explode: lisp is written so that optionals compose well.
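A minimal sketch of that nil-punning in Common Lisp (the alist is invented for illustration):

(car nil)                              ; => NIL, not an error
(cdr (assoc :b '((:a . 1) (:b . 2))))  ; => 2
(cdr (assoc :missing '((:a . 1))))     ; => NIL, a missing key just degrades to NIL
(cadr (assoc :missing '((:a . 1))))    ; => still NIL; the chain keeps composing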
Speaking of composing, Lisps tend to be amazing with regard to composability. This is another line that cuts between CL, Scheme and Clojure on one side, with Python and Javascript firmly on the other side in my experience.
Lisps are as dynamic as languages ever go, unapologetically.
- Gödel, Escher, Bach: an Eternal Golden Braid (you have GEB/EGB, and I guarantee you he noticed those notes form a musical triad)
- Metamagical Themas (anagram of Mathematical Games)
- Le Ton beau de Marot (I don't have my copy at hand, but "ton beau" is surely a pun on "tombeau" meaning "tomb")
- The Mind's I (editor) (I = eye)
- That Mad Ache (translation of "La chamade" by Francoise Sagan; "mad ache" is an anagram of "chamade")
QVM, a Quantum Virtual Machine https://github.com/quil-lang/qvm
Quilc, an "advanced optimizing compiler" for Quil https://github.com/quil-lang/quilc
Coalton, "a statically typed functional programming language built with Common Lisp." https://coalton-lang.github.io/20211010-introducing-coalton/
Interestingly, this is no longer the case. Modern Lisps now evaluate (car nil) and (cdr nil) to nil. In the original Lisp defined by John McCarthy, indeed CAR and CDR were undefined for NIL. Quoting from <https://dl.acm.org/doi/pdf/10.1145/367177.367199>:
> Here NIL is an atomic symbol used to terminate lists.
> car [x] is defined if and only if x is not atomic.
> cdr [x] is also defined when x is not atomic.
However, both Common Lisp and Emacs Lisp define (car nil) and (cdr nil) to be nil. Quoting from <https://www.lispworks.com/documentation/HyperSpec/Body/f_car...>:
> If x is a cons, car returns the car of that cons. If x is nil, car returns nil.
> If x is a cons, cdr returns the cdr of that cons. If x is nil, cdr returns nil.
Also, quoting from <https://www.gnu.org/software/emacs/manual/html_node/elisp/Li...>:
> Function: car cons-cell ... As a special case, if cons-cell is nil, this function returns nil. Therefore, any list is a valid argument. An error is signaled if the argument is not a cons cell or nil.
> Function: cdr cons-cell ... As a special case, if cons-cell is nil, this function returns nil; therefore, any list is a valid argument. An error is signaled if the argument is not a cons cell or nil.
I would guess it's by far the most active Guile project.
https://www.grammarly.com/blog/engineering/running-lisp-in-p...
At UC Berkeley, however, the first computer science class was taught in Scheme (a dialect of Lisp)...and it absolutely blew me away. Hofstadter is right: it feels the closest to math (reminding me a ton of my math theory classes). It was the first beautiful language I discovered.
(edit: I forgot to paste in the quote I loved!)
"...Lisp and Algol, are built around a kernel that seems as natural as a branch of mathematics. The kernel of Lisp has a crystalline purity that not only appeals to the esthetic sense, but also makes Lisp a far more flexible language than most others."
present companies (that we know about): https://github.com/azzamsa/awesome-lisp-companies/
$ telnet its.pdp10.se 10003
Trying 88.99.191.74...
Connected to pdp10.se.
Escape character is '^]'.
Connected to the KA-10 simulator MTY device, line 0
^Z
TT ITS.1652. DDT.1548.
TTY 21
3. Lusers, Fair Share = 99%
Welcome to ITS!
For brief information, type ?
For a list of colon commands, type :? and press Enter.
For the full info system, type :INFO and Enter.
Happy hacking!
:LOGIN SUSAM
TT: SUSAM; SUSAM MAIL - NON-EXISTENT DIRECTORY
:LISP
LISP 2156
Alloc? n
*
(status lispversion)
/2156
(car nil)
NIL
(cdr nil)
NIL
^Z
50107) XCT 11 :LOGOUT
TT ITS 1652 Console 21 Free. 19:55:07
^]
telnet> ^D Connection closed.
$
I dunno, there's Nyxt, Google Flights, MediKanren, there's some German HPC guys doing stuff with SBCL, Kandria,... I believe there's also a HFT guy using Lisp who's here on HN. LispWorks and Franz are also still trucking, so they prolly have clientele.
There are fewer great big FLOSS Lisp projects than C or Rust, but that doesn't really tell the whole story. Unfortunately proprietary and internal projects are less visible.
I'd find it a cleverer bit of wordplay if "le ton beau de ..." itself didn't feel clumsy. Surely it would always be "le beau ton de ..."?
Apparently it didn’t make the transition to 64-bit machines well? But I haven’t really looked.
Clojure in general is far better suited for manipulating data than anything else (in my personal experience). It is so lovely to send a request, get some data, and then interactively go through that data - sorting, grouping, dicing, slicing, partitioning, transforming, etc.
The other way around is also true - for when you need to generate a massive amount of randomized data.
XML and HTML are attributed text, while S-expressions are more like a homogeneous tree
If you have more text than metadata, then they are more natural than S-expressions
e.g. The closing </p> may seem redundant, until you have big paragraphs of free form text, which you generally don't in programs
[1] http://johnj.com/posts/oodles/
edit: clarification
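A rough side-by-side of the same fragment, purely illustrative (neither line is from the article):

<p lang="en">Hello <b>world</b></p>      <!-- XML: text carrying attributes -->
(p :lang "en" "Hello " (b "world"))      ; one common s-expression encoding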
To make this comment more actionable, my understanding of Python's homoiconic functionality comes down to these methods, more-or-less:
1. Functions that apply other functions to iterables, e.g. filter(), map(), and reduce(). AKA the bread-n-butter of modern day JavaScript.
2. Functions that wrap a group of functions and route calls accordingly, e.g. @singledispatch.
3. Functions that provide more general control flow or performance conveniences for other functions, e.g. @cache and partial().
4. Functions that arbitrarily wrap other functions, namely wraps().
Certainly not every language has all these defined in a standard library, but none of them seem that challenging to implement by hand when necessary -- in other words, they basically come down to conveniences for calling functions in weird ways. Certainly none of these live up to the glorious descriptions of homoiconic languages in essays like this one, where "self-introspection" is treated as a first class concern.
What would a programmer in 2024 get from LISP that isn't implemented above?
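One hedged answer, sketched in Common Lisp (with-logging is a made-up macro): a macro receives its argument as an unevaluated list it can inspect or rewrite before anything runs, which none of the wrappers above can do to their caller's source.

(defmacro with-logging (form)
  ;; FORM arrives as plain list data; log its printed form, then splice it back in.
  `(progn
     (format t "about to run: ~s~%" ',form)
     ,form))

(with-logging (+ 1 2))   ; prints "about to run: (+ 1 2)", returns 3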
It's a shame things took the course they did with preferred languages.
> (defalias 'plus #'+)
> (defalias 'quotient #'/)
> (defalias 'times #'*)
> (defalias 'difference #'-)
Looks like we also need a defmacro for def that is used much further in the article:
> > (def rac (lambda (lyst) (car (reverse lyst))))
I mean the above example fails in Emacs:
ELISP> (def rac (lambda (lyst) (car (reverse lyst))))
*** Eval error *** Symbol’s function definition is void: def
If we want the above example to work, we need to define def like this:
ELISP> (defmacro def (name lambda-def) `(defalias ',name ,lambda-def))
def
Now the previous example, as presented in the article, works fine:
ELISP> (def rac (lambda (lyst) (car (reverse lyst))))
rac
ELISP> (rac '(your brains))
brains
Sixty years later, most Lisp programs are still full of operations on conses. A more accurate name for the language would be "Cons Processor!" It's a reminder that Lisp was born in an era when a language and its implementation had to fit hand in glove. I think that makes the achievement of grounding a computer language in mathematical logic all the more remarkable.
https://taeric.github.io/CodeAsData.html
The key for me really is in the signature for "eval." In Python, as an example, eval takes in a string. So, to work with the expression, it has to fully parse it, with all of the danger that entails. For Lisp, eval takes in a form. Still dangerous to evaluate random code, mind. But you can walk the code without evaluating it.
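A small sketch of that difference; this works in Common Lisp and, unchanged, in Emacs Lisp:

(let ((form '(+ 1 (* 2 3))))
  (list (length form)   ; => 3, it is just a list we can walk
        (car form)      ; => +, the operator is inspectable data
        (eval form)))   ; => 7, evaluation is a separate, explicit step
;; => (3 + 7)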
In Scheme my code is littered with
(if (null? lst)
;; handle empty case here
...)
Simply because otherwise car throws an error. This whole section is often unnecessary in CL.

> (cond ((eq (oval pi) pie) (oval (snot pie pi)))
(t (eval (snoc (rac pi) pi))))
I realised after a few seconds that they are meant to be "eval" and "snoc" instead. The above code should be written as the following instead:
(cond ((eq (eval pi) pie)
       (eval (snoc pie pi)))
      (t (eval (snoc (rac pi) pi))))
This article has been a fascinating read, by the way. Kudos to the maintainer of the Gist post. I am also sharing these corrections as comments on the Gist post.

EDIT #1: Downloaded a copy of the original Scientific American article from https://www.jstor.org/stable/24968822 and confirmed that indeed the functions "oval" and "snot" are misspellings of "eval" and "snoc".
EDIT #2: Fixed typo in this comment highlighted by @fuzztester below.
Apparently the behaviour of the CAR and CDR of NIL being NIL was from Interlisp, and it wasn't until the designers of Maclisp and Interlisp met to exchange ideas that they decided to adopt that behaviour (it was also ostensibly one of the very few things they actually ended up agreeing on). The reason they chose it was because they figured operations like CADR and such would be more correct if they simply returned NIL if that part of the list didn't exist rather than returning an error, otherwise you had to check each cons of the list every time. (If somebody can find the source for this, please link it!)
CL-USER> (symbolp nil)
T
CL-USER> (atom nil)
T
CL-USER> (listp nil)
T
Similar results in Emacs Lisp. But in MIT Scheme, we get:
1 ]=> nil
;Unbound variable: nil
Of course, we can use () or (define nil ()) to illustrate your point. For example:
1 ]=> (car ())
;The object (), passed as the first argument to car, is not the correct type.
But when I said NIL earlier, I really meant the symbol NIL that evaluates to NIL and is both a LIST and ATOM. But otherwise, yes, I understand your point and agree with it.

But I'd love to try! Maybe I'll take an online class for fun.
[0]: https://www.goodreads.com/book/show/181239.Metamagical_Thema...
(the book's title is the article series, which originated as an anagram of the article series that Martin Gardner authored, "Mathematical Games," also published in Scientific American and which Hofstadter then took over)
It _is_ an elegant and minimal expression of a style of programming that is ubiquitous among dynamically-typed, garbage-collected languages. And it's a "theory" in the sense that it seems complete, and that you can think of ways to solve problems in Scheme and translate them into other dynamically-typed languages and still end up with an elegant solution. Emphasis on the elegant (since minimal, wart-free, consistent and orthogonal, etc.).
Scheme was a simplification and a "cleaning up" compared to conventional Lisps of the time (lexical scoping, single shared namespace for functions and variables etc.)
https://hexdocs.pm/elixir/macros.html
It is certainly possible to implement this sort of thing in other languages, I think, depending on the compilation or preprocessing setup
Correction of your correction:
confirmed that indeed the functions "oval" and "snot" are misspellings of "eval" and "snoc".
And I guess snoc is cons reversed and rac is car reversed.
You're a troll, but I'll feed you. I adapted Peter Norvig's excellent lispy2.py [0] to read json. I call it JLisp [1].
Lispy2 is a Scheme implementation, complete with macros, that executes on top of Python. I made it read JSON, really just replacing () with [], and defining symbols as {'symbol': 'symbol_name'}. I built it because it's easier to get a webapp to emit JSON than paren Lisp. I also knew that basing the interpreter on Lisp meant that I wouldn't back myself into a corner. There is incredible power in Lisp, especially the ability to transform code.
[0] https://norvig.com/lispy2.html
[1] https://github.com/paddymul/buckaroo/blob/main/tests/unit/li... #tests for JLisp
https://en.m.wikipedia.org/wiki/CAR_and_CDR
In any case, ASTute observation
er, ASTute ;)
How so? If car of nil returns nil, then how does a caller distinguish between a value of nil and a container/list containing nil?
The only way is they can check to see if it's a cons pair or not? So if you have to check if it's a cons pair then you're doing the same thing as in scheme right?
I may be missing something, but isn't it effectively the same amount of work, potentially? You need to check for nil and you need to check if it's a pair.
Thanks! Fixed.
> And I guess snoc is cons reversed and rac is car reversed.
Indeed! That's exactly how those functions are introduced in the article. Quoting from the article:
> The functions rdc and snoc are analogous to cdr and cons, only backwards.
The LISP (elisp?) syntax itself gives me a headache to parse so I think I'll stay away for now, but I'll definitely be thinking about how to build similar functionality into my high level application code -- self modification is naturally a big part of any decent AGI project. At the risk of speaking the obvious, the last sentence was what drove it home for me:
It is not just some opaque string that gets to enjoy all of the benefits of your language. It is a first class list of elements that you can inspect and have fun with.
I'm already working with LLM-centric "grammars" representing sets of standpoint-specific functions ("pipelines"), but so far I've only been thinking about how to construct, modify, and employ them. Intelligently composing them feels like quite an interesting rabbit hole... Especially since they mostly consist of prose in minimally-symbolic wrappers, which are probably a lot easier for an engineer to mentally model--human or otherwise. Reminds me of the words of wonderful diehard LISP-a-holic Marvin Minsky:

> The future work of mind design will not be much like what we do today. ...what we know as programming will change its character entirely - to an activity that I envision to be more like sculpturing.

> To program today, we must describe things very carefully because nowhere is there any margin for error. But once we have modules that know how to learn, we won't have to specify nearly so much - and we'll program on a grander scale, relying on learning to fill in details.

In other words: What if the problem with Lisp this whole time really was the parentheses? ;)

Source is Logical Versus Analogical or Symbolic Versus Connectionist or Neat Versus Scruffy: https://onlinelibrary.wiley.com/doi/full/10.1609/aimag.v12i2...
And algebraic data types make it possible to make your code conform to reality in ways that classes can't. Once you're exposed to them, it's very much like learning about addition after having been able to multiply for your whole life. (In fact that's more than a metaphor -- it's what's happening, in a category theoretic sense.)
Haskell has other cool stuff too -- lenses, effect systems, recursion schemes, searching for functions based on their type signatures, really it's a very long list -- but I think laziness, purity and ADTs are the ones that really changed my brain for the better.
At least Hofstadter was successful at getting me interested in math beyond high school.
This article, like every other Lisp article, tells pre-teen me nothing that he could use. Nobody ever demonstrated how much easier task X is in Lisp over asm/C/Pascal/etc.
By contrast, current me could have told pre-teen me "Hey, that spell checker that took you 7 months to write in assembly? Yeah, it's damn near trivial in Lisp on a microcomputer with bank switched memory that nobody ever knew how to utilize (it makes garbage collection completely deterministic even on a woefully underpowered CPU). Watch."
I want to weep over the time I wasted doing programming with the equivalent of tweezers, rice grains and glue because every Lisp article and textbook repeated the same worn out lists, recursion and AI crap without ever demonstrating how to do anything useful.
I keep meaning to expand on the idea. I keep not doing so. I have higher hopes that I can get back to the rubik's cube code. Even there, I have a hard time getting going.
Your comment is great though, consider me convinced. I've done a bit of messing with Lisp, but really would like to try write something in Haskell, or slog through a book or two, some day.
When I first got a Byte magazine as a pre-teen, one of the articles was Lisp code for symbolic differentiation and algebraic simplification. I really couldn't follow it but felt there was something intriguing there. Certainly it wouldn't have been easier in Basic.
(Byte September 1981, AI theme issue. Later I was able to tell the code was not so hot...)
I didn't really get into Lisp until the late 80s with XLisp on a PC, and SICP. Worth the wait!
At that point, if you're making the two calls, how is LISP's behavior any more ergonomic than Scheme's? I'm not saying it's not possible, I just don't see it.
Can you show code between the two and how one is much worse than the other?
(if lst
...)
if the empty list is falsy, but Scheme eventually chose to add #t and #f. Oddly, #f is the only false value but #t is not the only true value.

I did have a copy of "LISP: A Gentle Introduction to Symbolic Computation" by Touretzky in 1986. It wasn't really that much better than any of the articles. It never explained why using Lisp would be so much easier than anything else even for simple programming tasks.
Had some of the Lisp hackers deigned to do stuff on the piddly little micros and write it up, things would look a whole lot different today.
Maybe there was a magazine somewhere doing cool stuff with Lisp on micros in the 1980-1988 time frame, but I never found it.
No. It has an empty list, which is a singleton atomic value whose type is not shared with any other object, and it has a boolean false value, which is distinct from the empty list. A user can create a symbol named NIL, but that symbol has no characteristics that distinguish it from any other symbol. You can, of course, bind NIL to either the empty list or boolean false (or any other value) but it can only have one value at a time (per thread).
I mean... I guess you could think of it as having its own set of self-consistent axioms, and from them you can build things. It's a lot larger set of axioms than most branches of mathematics, though.
I guess, if Hofstadter meant the same level of naturalness, well, yes, C did feel pretty natural to me, so... maybe?
(and (car x)
(cadr x)
(cadar x)
(cadadr x))
You would have to write this every time you want to see if there's really a CADADR, whereas if CAR and CDR can return NIL then you can just write (cadadr x), and CADADR can still be defined as (car (cdr (car (cdr x)))) and have the desired behaviour.

Algol 60 was the first language with lexical scope, while Algol 68 was a kitchen-sink language that (positively) influenced Python and (negatively) influenced Pascal.
In general, we can say that the Lisp language is very good at manipulating the same data types that the syntax of Lisp programs is made from. This makes it very easy to write Lisp programs that swallow up Lisp programs as raw syntax, analyze Lisp programs syntactically, and/or spit out new Lisp programs as raw syntax.
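A minimal illustration in Common Lisp (the form is invented): the program is swallowed as a list, rewritten with ordinary list operations, and only then evaluated.

(let ((program '(+ 1 (* 2 3))))
  (list (eval program)                   ; run it as written      => 7
        (eval (subst '- '+ program))))   ; rewrite + to -, re-run => -5
;; => (7 -5)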
(compress (reverse (explode 'ABC)))
;COMPRESS UNDEFINED FUNCTION OBJECT
(implode (reverse (explode 'ABC)))
CBA
The point being that I never learn any fancy string-processing commands. I just implement explode and compress.

FYI, that means the opposite of how you used it.
"Never fails to disappoint" is an idiom that means a person or thing consistently disappoints.
"One of the most important and fascinating of all computer languages is LISP (standing for "List Processing"), which was invented by John McCarthy around the time Algol was invented. Subsequently, LISP has enjoyed great popularity with workers in Artificial Intelligence."
https://readable.sourceforge.io/
I looked into porting it to elisp a while back, but the elisp reader was missing a feature or two sweet-expressions require. I should see if that's still true...
Bit by bit, more people got used to doing data analysis and AI research in Python. Some projects were even written for Python first (e.g. Tensorflow or Keras). Eventually, Python had so many high-quality packages that it became the de facto for modern AI.
Is it the _best_ language for AI, though? I doubt it. However, it is good enough for most use cases.
I found this bit extra amusing:
>It would be nice as well as useful if we could create an inverse operation to readers-digest-condensed-version called rejoyce that, given any two words, would create a novel beginning and ending with them, respectively - and such that James Joyce would have written it (had he thought of it). Thus execution of the Lisp statement (rejoyce 'Stately 'Yes) would result in the Lisp genie generating from scratch the entire novel Ulysses. Writing this function is left as an exercise for the reader.
It took a while, but we got there. I don't think 2024's AI is quite what he had in mind in 1983, but you have to admit that reproducing text given a little seeding is a task that quite suits the AI of today.
> February, 1983
> IN previous columns I have written quite often about the field of artificial intelligence - the search for ways to program computers so that they might come to behave with flexibility, common sense, insight, creativity, self awareness, humor, and so on.
This is very amusing to me because it reads like a list of things LLMs truly stink at. Though at least they finally represent some nonzero amount of movement in that direction.
https://en.m.wikipedia.org/wiki/Teach_Yourself_Scheme_in_Fix...
Also:
XML: eXtremely Murky Language
or Mindblowing
Long time ago...
The book I found most useful in the early times as an introduction to Lisp and programming with it was LISP from Winston & Horn. The first edition was from 1981 and the second edition from 1984. I especially liked the third edition.
https://en.wikipedia.org/wiki/Lisp_(book)
Lisp on microcomputers in the early 80s was mostly not useful - that was my impression. I saw a Lisp for the Apple II, but that was very barebones. Next was Cambridge Lisp (a version of Standard Lisp) on the Atari ST. That was more complete but programming with it was a pain. Still, I found the idea of a very dynamic&expressive programming language and its interactive development style very interesting. The first useful implementations on smaller computers I saw were MacScheme and Coral Lisp, both for the Macintosh, Mid 80s...
There were articles about Lisp in the Byte magazine early on, but having access to the software mentioned was difficult.
The early use cases one heard of were: computer science education, functional programming, generally experimenting with new ideas of writing software, natural language processing, symbolic mathematics, ... This was nothing which would be more attractive to a wider audience. David Betz Xlisp later made Lisp more accessible. Which was then used in AutoCAD as an extension language: AutoLisp.
Luckily I had starting mid 80s access at the university to the incoming stream of new research reports and there were reports about various Lisp related projects, theses, etc.
I am yet to find a syntax style more ergonomic than s-expressions. Once you appreciate the power of structural code editing, your view of s-expressions is likely to change.
To unpack the explanation, because I was wondering how the very negative statement could be misinterpreted:
"Never" is a negative; "fails" is a negative; in English, two negatives cancel out.
"Never fails to disappoint" means "always disappoints".
For some of us, we can just about handle the simple algebraic infix stuff, and we'll never make that leap to "my god, it's full of CARs".
I just bounced off it, and I have tried quite hard, repeatedly.
Idea: for the rest of us who can't simply flip syntax around in our heads, there should be an infix Lisp that tries to preserve some of the power without the weird syntaxless syntax.
There are of course several, of which maybe the longest-lived is Dylan:
https://en.wikipedia.org/wiki/Dylan_(programming_language)
... but instead of Dylan's Algol- or Pascal-like syntax, do a Dylan 2 with C-style syntax?
I felt the same way, a lot of people feel that way.
This is in part because FP is difficult, typed FP is difficult, and Haskell is difficult. All by themselves. They do get easier once you intuit more and more FP in general I'd say.
Then there's also a phenomena described in the Haskell Pyramid[0] where it sometimes appears more difficult than it really is.
Like a lot of things, actually building something gets you a long way, esp. with the advent of chat AIs, as it's comparatively easy to go back and forth and learn little by little.
Doesn't win any prize, nor is it content worthy of an "I rewrote X in Y" blogpost, but it does the job.
If they really meant the opposite that "HackerNews never disappoints" why would that be "sad"?
But since they said "sadly" I think they really meant what they wrote which is that HN consistently disappoints them. Maybe they meant that the Lisp articles on HN do not go into describing exactly how "code as data" works in actual practical matters and that is consistently disappointing? And they are finally happy when someone explained "code as data" to them in the comments section?
This is categorically not the case.
Let me paraphrase my own post from Lobsters a year or two back:
I hypothesise that, genuinely, a large fraction of humanity simply lacks the mental flexibility to adapt to prefix or postfix notation.
Algebraic notation is, among ordinary people, almost a metonym for “complicated and hard to understand”. I suspect that most numerate people could not explain BODMAS precedence and don’t understand what subexpressions in brackets mean.
I have personally taught people to program who did not and could not understand the conceptual relationship between a fraction and a percentage. This abstraction was too hard for them.
Ordinary line-numbered BASIC is, I suspect, somewhere around the upper bound of cognitive complexity for billions of humans.
One reason for the success of languages with C syntax is that it’s the tersest form of algebraic notation that many people smart enough to program at all can handle.
Reorder the operators and you’ve just blown the minds of the majority of your target audience. Game over.
I admire Lisp hugely, but I am not a Lisp proponent.
I find it fascinating and the claims about it intrigue me, but to me, personally, I find it almost totally unreadable.
Those people I am talking about? I say this because I am one.
I myself am very firmly in the camp of those for whom simple algebraic infix notation is all I can follow. Personally, my favourite programming language is still BASIC.
The key is "writing and maintaining" Lisp software.
Lisp often won't get learned by reading or writing ABOUT it, but by reading AND writing actual Lisp code.
It's a bit like riding a bike. You can study bikes for a long time, but you will typically not be able to ride a bike. That's something which can be learned when actually practicing to ride the bike. This means also not needing to consciously think about it, but by moving tasks to internal automatisms. Lisp code is data and this wants to be "manipulated". This manipulation is a key to learn Lisp. The other key element is to work with a system which gives live feedback -> interactive programming. "Interactive" means to do things, to fail, to improve, to do it again.
It's in part the experience of actually using an interactive programming system.
Ruby (not a lisp but bear with me) started to do this more correctly IMHO where a nil would start throwing errors if you tried to do things with it BUT it would still be equivalent to false in boolean checks.
I think programming languages are also long overdue for some controlled trials. They can't be blinded: any experimental subject bright enough to learn to program is probably going to know what language they are programming in.
But trials comparing the effectiveness of different languages. How long they take to attain a specified level of proficiency, how long to learn enough to produce working code, and importantly, readability: for instance, how long it takes to find intentionally-planted bugs in existing, unfamiliar code.
NeXT did this, way back in the 1980s, and Sun lost badly:
https://www.youtube.com/watch?v=UGhfB-NICzg
There is a writeup of some of it here:
http://www.kevra.org/TheBestOfNext/BooksArticlesWhitePapers/...
But speaking as a non-video-liker, this 17min one is worth it.
As stated, I think this design choice is terrible, especially if nil isn't equivalent to false in boolean comparisons (as it is in Ruby and Elixir - with Elixir actually providing two types of boolean operators with slightly different but significant behavior; "and" will only take pure booleans while "&&" will equate nil with false). It might mean cleaner-written code upfront but it's going to result in massively-harder-to-debug code because the actual error (a mishandled nil result) might only create a visible problem many stack levels away in some completely different part of the code.
You may be right there, but I think there is a point you are smoothing over and almost trying to hide here.
What if someone can't get to the point where they are able to write useful code?
If you can't start riding a bike without stabilisers or someone holding it, then you're never going to learn to ride well.
At around age 11 or 12 I tried to learn to roller-skate. My parents bought me skates, and I put them on and tried to stand up.
I fell over so much over a few days that I bruised my pelvis and walking became very painful, let alone lying down. It was horrible and I gave up.
25 years later I managed to learn to ride a snowboard, after years of failure, because of having to do an emergency turn to avoid hitting some children and getting up on one edge and learning that edge-riding is the key. Nobody told me, including 3 paid days of lessons with a professional teacher.
It took great persistence and physical pain but I did it. I gave up on skating of any kind.
My core point is that people vary widely in abilities. Some people pick up a complex motor skill in 10-15min and can do it and their skills grow from there. Others may struggle for days or weeks to attain that... And most are not doggedly determined enough to try for that long.
Algebra is most schoolchildren's way of describing "something extremely hard to learn and pointless in everyday life." For ordinary humans, the concepts of "variables" and "symbols" that manipulate them IS A WAY TO TALK ABOUT something super-difficult.
But most of it, with effort, can just about get through. Very very few choose to follow it further.
And yet, there are a few families of programming languages -- Lisp, Forth, Postscript, HP calculator RPN -- whose basic guiding assumption is "you will easily master this basic action, so let's throw it away and move on to the logic underneath".
And the people who like this family of languages are annoyed and offended that other languages that do not require this are hundreds of times more popular and are used by millions of people.
Worse still, when someone comes and says "hey, maybe we can simplify that bit for ordinary folks", they mock and deride the efforts.
Maybe just allow yourself to think: perhaps this stuff that's easy for me is hard for others, and I should not blame them for them finding it hard?
(defun explode (x)
  ;; Return X's printed representation as a list of one-character symbols.
  (mapcar (lambda (c)
            (intern (char-to-string c)))
          (string-to-list (prin1-to-string x))))
Turning character into symbol seems natural, because then you are reducing your needed function space even more. I'm surprised the original operated on prin1 output, not sure what the logic behind that is. On a Lisp machine (zl:explode "foo") gives me '(|"| |f| |o| |o| |"|)

a/b = c/d; we know a, b and c; solve for d, which is cb / a.
As I said, every grandma did that to guess some percentages.
Thus, if anyone can grasp that, he/she is ready for Algebra.

In German: "es ist noch nie ein Meister vom Himmel gefallen". We all start somewhere, we go to school, we have teachers, we have trainers/coaches, we have mentors, ...
I don't think studying it alone will help, best is with people around. Parents and friends will help us to learn how to ride a bike. They will give an example, they will give feedback on our attempts, they will propose what and how to try to master it. After the initial basic hurdle is done, then comes a lot of practice. But again, best by being embedded in a community. Learning such skills is a social activity.
There is a lot of pedagogical material to learn programming with Lisp, Logo, Scheme. I had courses about software development, using languages like PASCAL, LISP, Scheme and others. We got exercises and feedback. We got access to computers, cpu time and an environment for coding. I looked around and setup my own tools and wrote stuff with it. I discussed this stuff (code, environment, architecture, styles, ...) with a friend.
> perhaps this stuff that's easy for me is hard for others, and I should not blame them for them finding it hard?
Lot's of people are frightened by thinking/hearing that it is hard, while in fact it actually isn't.
For example, one often reads that German is very difficult for native English speakers. There are a lot of justifications given for that. The actual data says something different. German is very near to English; English even is a Germanic language: https://en.wikipedia.org/wiki/Germanic_languages
The actual ranking: https://effectivelanguagelearning.com/language-guide/languag...
Trying to learn Lisp without actually trying to write code, sounds like trying to learn a language without actually trying to speak with people. Possible, but unnecessary hard.
We need to make our brain adapt to the new language by moving into an environment, where the words connect to the real world and thus to meaning.
Maybe just allow yourself to think: Giving feedback is not "blaming". That's an early concept needed for moving forward.
Just call it a Result::Failure monad, say you meant to do that, and confuse legions of programmers for decades.
I became a father 5 years ago, so strictly, my mother is a grandmother. I could ask her, but I am very confident she does not know.
I have a science degree, and I just barely scraped through a statistics 101 course with great difficulty. I am pretty smart; I speak 6 foreign languages, and I have held down a career in software for approaching 40 years now, by understanding hard stuff and making it work, or documenting it, or explaining it.
But I find algebra hard, just scraped through a mathematics 'O' level in 1986 by taking corrective classes and resitting the 1985 exam that I failed.
I stand by what I said.
I've never heard this rule. Looking at the Wolfram explanation, I could do that, yes. But I've never heard of this, and I am pretty confident my mother could not do this.
For example, a useful datatype is an association list: (setq x '((a . 1) (b . 2) (c . nil))). You can query it by calling (assoc 'a x), which is going to give you back a cons cell, (a . 1) in this case. Now the presence or absence of this cell indicates the association. If you want to know explicitly that C is nil, then you have the option to, and it's similar in function call counts to Scheme. If you don't care about the distinction, though, you can do (cdr (assoc 'a x)), which is going to give you 1. Doing (cdr (assoc 'foo x)) will give you nil without erroring out. It's a pretty common pattern.
In the case of established data types like association lists, you will probably have a library of useful functions already defined; e.g. you can write your own getassoc function that hides the above. You can also return multiple values from getassoc the same way gethash does, the first value being the value and the second value being whether or not there's a corresponding cons cell.
But when you define your own ad hoc cons-cell-based structures, you don't have the benefit of predefined functions. So let's say you have an association list of symbols to cons cells: (setq x '((a . (foo . 1)) (b . (bar . 2)) (c . nil))). If I want to get foo out of that list, I'll say (cadr (assoc 'a x)), which will return foo. Doing (cadr (assoc 'c x)) or (cadr (assoc 'missing x)) will both return nil. These later manipulations require extensive scaffolding in Scheme.
Clojure is very well suited for data science of all shapes and sizes. There's a great meetup lead by Daniel Slutsky where they regularly discuss this topic, and there's #data-science channel in Clojurians Slack where they regularly post interesting findings. As for the libraries, anything used in Java/Javascript can be directly used. Besides, there is TMD, https://github.com/techascent/tech.ml.dataset - it's a well-regarded lib and provides solid functionality for data manipulation.
Describes the evolution from:
(cdr (assq key a-list))
to:
(let ((val (assq key a-list)))
(cond ((not (null? val)) (cdr val))
(else nil)))
Let me try to demonstrate with a parallel example.
> "es ist noch nie ein Meister vom Himmel gefallen"
My best guess is: A master does not ready from heaven fall?
One does not instantly become a master?
Different people find different skills easy.
So: ich kann ein bisschen Deutsch spreche. Nicht so viel, und mein Deutsch is nicht gut; es is sehr, sehr schlecht. Aber fur meine Ferien es genug ist.
Ich hat drei Tage Deutsch gestudiert, unt es war in 1989. Drei tage, am ein bus vom Insel Man nach der Rhein.
I am fairly good with languages. I can communicate in 6 foreign languages. Currently, I am studying Czech, because my wife is Czech, and I would like to be able to speak with her family, some of whom speak no English, or German, French, Spanish or anything else I speak at all.
Czech is really hard. It makes German look like an easy beginner's language. In place of German's 4 cases, Czech has 7; in place of German's 3 genders, Czech has 4. (Czechs think there are 3, but really there are 4. Polish has 5.)
I am somewhere past A2 level Czech, beginning B1, and I can hold a simple conversation, mainly in the present tense. But I started at age 45 and it took me about 5 or 6 years of work to get to this level. Basic tourist German I got in about 30 or 40 hours of hard work when I was 20 years old.
I am not bad at languages.
I am terrible at mathematics and very poor at programming. I used to be capable and proficient in BASIC and fairly good in FORTRAN. I managed a simple RLE monochrome image compression and decompression program in C, and an implementation of Conway's Game of Life in Pascal, and that is the height of my achievement.
I am pretty good at getting other people's code working, though. Enough to be paid to do it for decades.
I find Python quite hard -- weird complicated stuff like objects comes in early, and nasty C syntax peeks through even simple stuff like printing numbers.
Lisp, though, switches from just about comprehensible code to line noise very quickly after the level of "Hello world".
I got hold of a copy of SICP. It's famous. It's meant to be really good.
I could not follow page 1 of the actual tutorial.
Perhaps you know it.
In section 1.1.1, it says:
« (+ (* 3 (+ (* 2 4) (+ 3 5))) (+ (- 10 7) 6))
which the interpreter would readily evaluate to be 57. We can help ourselves by writing such an expression in the form
(+ (* 3
      (+ (* 2 4)
         (+ 3 5)))
   (+ (- 10 7)
      6))
following a formatting convention known as pretty-printing, in which each long combination is written so that the operands are aligned vertically. The resulting indentations display clearly the structure of the expression. »
The "helpful" pretty-printed part is incomprehensible to me. Section 1.1.1 is about where I gave up.
I think that this kind of issue is not just me.
Again: I submit that a bunch of people good at a very difficult skill are badly over estimating how good ordinary folks would be at it.
Most people can't program. Most people can't do mathematics. Most people are not good at this stuff.
The people that can do maths and can program mostly can only program in simple, infix-notation, imperative languages. Functional languages, or even prefix- and postfix-notation, is a step further than I suspect that 99% of humans can go.
And the attitude of those who can do it to those of us who can't do it is really not pleasant.
Perhaps the macro facilities are also convenient but that is not the part that makes Lisp mathematical, it's the higher order programming.
And it needn't even be something fancy, just being able to have a data table of tests and have the test functions generated and executed from the table is the power demonstrated.
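A minimal sketch of that table-driven style in Common Lisp; ABS is just a stand-in for whatever is under test:

(defparameter *cases* '((2 2) (-3 3) (0 0)))   ; (input expected) pairs, plain data

(mapcar (lambda (case)
          (destructuring-bind (input expected) case
            ;; each check is generated from a table entry, not written by hand
            (list input (= (abs input) expected))))
        *cases*)
;; => ((2 T) (-3 T) (0 T))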
No doubt about that.
SICP is the wrong book.
SICP is for people who are good at maths. Most of the examples are maths related. That's a well known complaint about the book. Often such maths-heavy introductory courses filter out the students who are not good at maths. On purpose.
SICP is not for beginners learning Lisp programming. SICP was an university introductory course book for computer science. It was developed out of maths-heavy CS lectures. Various other books tried to improve it both to make some of the topics easier to learn or to make it more advanced in programming language technology.
Easier SICP from Brian Harvey
https://www.youtube.com/watch?v=cuTOo_Kj4U0&list=PL91cR71aKp...
or him adopting this stuff to Logo: Computer Science Logo Style. https://people.eecs.berkeley.edu/~bh/
Or his book "Simply Scheme": https://people.eecs.berkeley.edu/~bh/ss-toc2.html
But what you are looking for is a book for a software developer wanting to learn practical Lisp programming with different examples.
Beware though, that there are today more than one flavor of Clojurescript, nbb for instance still acts just like JS in this regard.
He is? What is the distinction he is making?
This writing style is... interesting.
While some atoms can be assigned values, the atom 1729 cannot be assigned any value other than the number 1729.
However, I do want to say something about listing out your qualifications and experience like you did on here... in the petty power struggles and trolling on the internet it does the exact opposite of what it seems like it should. It's putting the other person in charge of deciding if you are "good enough" to participate or have an opinion, by implicitly making an effort to convince them and asking them to judge you. Your opinion and reasoning carry more weight on their own, without arguing why you should have the right to have them.
This impression can be changed somehow by the fact that Haskell and its community has two faces: There is the friendly, "stuff-just-works" and "oh-nice-look-at-these-easy-to-understand-and-useful-abstractions" pragmatic Haskell that uses the vanilla language without many extensions, and is written by people who solve some real-world problem by programming.
Then there is the hardcore academic crowd - in my experience, very friendly, but heavily into mathematics, types and programming language theory. They make use of the fact that Haskell is also a research language with many extensions that are someone's PhD thesis. Which might also be the only documentation for that particular extension if you are unlucky. However, you can always ask - the community is rather on the side of oversharing information than the opposite.
Rust fills that gaping hole in my heart that Haskell opened a bit - not completely, but when it comes to $dayjob type of work, it feels somewhat similar (fight the compiler, but "when it compiles, it runs").
The last time I did ClojureScript in serious capacity was for a school project in 2021, specifically because I wanted to play with re-frame and the people who designed the project made the mistake of saying I could use "whatever language I want".
It makes sense, but I guess I didn't realize that ClojureScript generates some nice runtime wrappers to ensure correctness (or to at least minimize incorrectness).
I guess that means that if you need to do any kind of CPU-intensive stuff, ClojureScript will be a bit slower than TypeScript or JavaScript, right? In your example, you're adding an extra "if" statement to do the type check. Not that it's a good idea to use JS or TypeScript for anything CPU-heavy anyway...
However, thinking in terms of 'cognitive overhead' for a very minor design choice is very silly. I don't suffer any 'cognitive overhead' from having CAR and CDR work on NIL when I write Common Lisp because I'm used to it, but I do suffer 'cognitive overhead' when they don't in Scheme, which is the 'alternate world with a different design'. I am incredulous to the idea that one is actually superior to the other, and suppose that it is simply a matter of preference.
In rare cases, sure, it can add some overhead, and might not be suitable I dunno for game engines, etc., but in most use-cases it's absolutely negligible and brings enormous advantages otherwise.
Besides, there are some types of applications that are simply really difficult to build with a more "traditional" approach; watch this talk, I promise, it's some jaw-dropping stuff:
SpreadSheesh! talk by Dennis Heihoff https://www.youtube.com/watch?v=nEt06LLQaBY
but you may have misunderstood what I meant.
I wasn't criticizing you.
it was just a joke related to that scheme book.
I remember one particular issue about USA rivers which was really good, with great photos.
damn cool article.
the suwannee river was one that was covered.
https://en.m.wikipedia.org/wiki/Suwannee_River
I looked up that river in Wikipedia for the first time today.
TIL it is a blackwater river. first time I heard the term.
https://en.m.wikipedia.org/wiki/Blackwater_river
the NG issues used to come with very good maps as supplements, too, in color.
also there used to be nice color ads about good cameras, IIRC, like canon, minolta, etc, and cars like the cadillac, lincoln, etc.
gas guzzlers, of course.
a different time.
Outside of expressions, those languages are essentially prefix in that the operator comes before the list of arguments.
Whilst I'm vaguely familiar with Bourbaki and how it strongly influenced the way mathematics is written today, I hadn't come across that dichotomy before. Your answer was what I was looking for!
What I was trying to say was: "I am pretty smart, but I can't do this."
Which means: "different people are smart in different ways."
Which means: "what is no problem for Lisp coders can be a pretty big problem for other folks."
Polish has 3 genders (masculine, feminine and neuter), just like Czech. And 7 cases, like Czech.
In no particular order:
1. żeński
2. nijaki
3. męski męskożywotny
4. męski męskorzeczowy
5. męski męskoosobowy
Contrast with Czech:
1. ženský
2. střední
3. mužský životný
4. mužský neživotný
You may not notice them, you may not consider them to be genders, but they look like it, they act like it; they're there and they make life very difficult for foreign learners.
If it looks like a duck, walks and swims and quacks like a duck, it's a duck.
I lived in Czechia 10 years and after over half a decade of bloody hard work, I got to beginning B1 level Czech. It has 4 genders and they change adjectives and the accusative declension, and it is not important to me that Czechs don't consider them genders. They're genders. The levels of the hierarchy do not matter, merely the number of nodes.
A comparison: English has no future tense, strictly speaking. But in reality, really, it does: "I will say X". In fact arguably two: "I am going to say X." Technically to a linguist it's not a tense, it's a mode expressed with an auxiliary verb, but that doesn't matter. Acts like a tense. Used like a tense. It's a tense.
Slavic nouns come in arbitrary categories and you need to know which category it's in to conjugate it properly. French and Spanish have 2, German has 3, Czech has 4, Polish has 5. What they are called? Don't care. Not important to me.
I do not know Polish or speak Polish. I am 100% not claiming any authority here.
For one, "męski męskożywotny" is not what it is called, it is just męskożywotny (the gender is already in the word, male male-animate, has a weird ring to it).
But all that means is that the object is of a masculine gender, and is living.
Męskoosobowy (masculine, person) -- małego chłopca (small boy)
Męskozwierzęcy (masculine, animal) -- małego psa (small dog)
Męskorzeczowy (masculine, thing) -- mały dom (small house)
Żeński (feminine) -- małą górę (small hill)
Nijaki (neuter) -- małe zwierzę (small animal)
The three masculine examples are all of the same gender, masculine -- the difference is if they are a person, animal or thing. None of which are genders, a house and a dog are both masculine.
I'm not going to argue about the complexity of Slavic, especially West Slavic, languages -- cause they are complicated. :-). But you are absolutely incorrect in saying that we (Czech or Polish) have more than 3 genders. That you don't think it is particularly important is a bit sad, since these are the things that make Slavic such a fun language group.
From the point of Biology: Lisp is a prokaryotic cell - simple, fundamental, highly adaptable.
In Chemistry: Lisp is carbon - versatile, forms the basis of complex structures.
In Geology: Lisp is like bedrock - foundational and supporting diverse structures above it.
In Astronomy: Lisp is a primordial star - ancient, influential, contributing to the formation of newer elements.
In Physics: Lisp is a quark - the basis of all baryonic matter.
</nerd-rant>
From a syntactic-flavor perspective, endless parentheses turn me off, but they also map cleanly to significant indentation (where any new open paren is a new indentation level and a close paren maps to a backdent). Has anyone tried a Lisp that uses indentation instead of parens?
I'm probably failing to consider edge cases but it seems like a potentially simple tweak that might make lisps more palatable to many
imagine that, a lisp without parens... (empty cons literals... crap, that's 1 edge case!)
From the start, John McCarthy believed that Lisp would be programmed using M-expressions and not S-expressions. M-expressions are still quite parenthetical, but have some syntactic sugar for case statements and such.
In the second incarnation of the Lisp project, which was called Lisp 2, McCarthy's team introduced an Algol-like syntactic layer transpiling to Lisp. This was still in the middle 1960's! The project didn't go anywhere; Lisp 1.5 outlived it, and is the ancestor of most other Lisp stuff.
In the early 1970's, Vaughan Pratt (of "Pratt parser" fame) came up with CGOL: another alternative programming-language syntax layer for Lisp.
Scheme has a "sweet expressions" SRFI 110 which I think was originated by David Wheeler. It is indentation-based syntax.
The Racket language has numerous language front ends, which are indicated/requested in the source file with #lang. I think one of them is sweet expressions or something like it.
Those are just some of the notable things, not counting lesser known individual projects.
Lisp came out in 1960. The s-expression-only syntax was an accident or a discovery - depending on one's view. Over the many years no attempt to add significant indentation syntax without parentheses gained more than a few users. Syntax variants without parentheses (and no significant indentation) only had a marginally better fate. Sometimes it even contributed to the failure of Lisp derived languages (-> Lisp 2, Dylan)...
You would think that there is a limited set of “triangle centers” but he showed us (and he had us discover and draw them out using The Geometer's Sketchpad) dozens of ways to find triangle centers and he had notes on hundreds more definitions of triangle centers.
His approach to teaching was fun and made us want to take on challenging problems. :)
I also don't deal with significant-indentation in languages usually (and have a strong Python distaste); though I've been playing with Roc (https://www.roc-lang.org/), which has this, and have used HAML (https://haml.info/) in the past, where it seemed useful. I suppose auto-indenting is impossible in a significant-indentation language depending on what the editor can intuit based on how the previous line ended, but I don't think I'd need that feature as long as it simply held the current indentation and just let me hit Tab or Backspace. (I could see things becoming a mess if you manage to screw up the indentation, though.)
I did research "sweet expressions" (which are apparently also called T-expressions) and found the prior art there in Scheme and Lisp, and a library called "sweet" for Racket (which is another intriguing lisp dialect!). These might have gotchas, but apparently they've sufficiently solved the problem enough to be usable.
I do simply like how "T-expressions" look. Which is something I guess I care about, although I know that's not a universal among coders. (My guess is that those who care about such things are simply not 100% left-brained about their coding and are invested in the "writing" aspect of the craft.)
Very easily; but the point is that it's very often easy to design things so that the caller doesn't have to care.
For instance, lookup in an associative list can just be (cdr (assoc key alist)).
If the key is not found, assoc returns nil, and so cdr returns nil.
Right, so when we use this shortcut, we have an ambiguity: does the list actually have that key, but associated with the value nil? Or does it not have the key?
Believe it or not, we can design the data representation very easily such that we don't care about the difference between these two cases; we just say we don't have nil as a value; a key with a value nil is as good as a missing key.
This situation is very often acceptable. Because, in fact, data structures are very often heavily constrained in what data types they contain. Whenever we assert that, say, a dictionary has values that are, say, strings, there we have it: values may not be nil because nil is not a string. And so the ambiguity is gone.
A nice situation occurs when keys are associated with lists of values. A key may exist, but be associated with an empty list (which is nil!). Or it may not exist. We can set things up so that we don't care about distinguishing these two. If key K doesn't exist then K is not associated with a list of items, which is practically the same as being associated with an empty list of items. If we split hairs, it isn't, but in a practical application things can be arranged so it doesn't matter.
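A minimal Common Lisp sketch of that situation, with a hypothetical *pets* alist just for illustration:

  (defparameter *pets* '((alice cat dog) (bob)))

  (cdr (assoc 'alice *pets*))  ; => (CAT DOG)
  (cdr (assoc 'bob *pets*))    ; => NIL  (key present, empty list of pets)
  (cdr (assoc 'carol *pets*))  ; => NIL  (key absent)

  ;; When the distinction does matter, ASSOC itself still tells us:
  (assoc 'bob *pets*)          ; => (BOB)
  (assoc 'carol *pets*)        ; => NIL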
Even when we have a heterogeneous list in Lisp, like one that can have symbols, numbers, strings or widgets, we can almost always exclude nil as a matter of design, and thus cheerfully use the simpler code.
We cannot exclude nil when a list contains Boolean values, because nil is our false.
We also cannot exclude it when it contains lists, because nil is our empty list.
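For instance (a small Common Lisp illustration), the ambiguity only bites when nil itself is a legitimate element:

  (find 'b '(a b c))     ; => B    (present)
  (find 'z '(a b c))     ; => NIL  (absent)
  (find nil '(t nil t))  ; => NIL  (present as false... or absent? can't tell)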
The beauty is that in many situations, we can arrange not to have to care about the distinction between "item is missing" and "item is false" and "item is an empty list", and then we can write terser code.
When you see such terse code from another programmer, you know instinctively what the deal is with how they are treating nil before even looking at any documentation or test cases.
The Scheme language and its surrounding culture are also culprits. Though Scheme isn't a purely functional language, it emphasizes pure programming more than its Lisp-family predecessors. The basic language provides tail-recursive constructs instead of iterative ones, and requires implementations to optimize tail calls.
I doubt it. Firstly, there are entire prefix and postfix natural languages, which have capable native speakers of all intellectual persuasions. But in natural languages, sentences do not go to very deep levels of nesting before people get confused.
In programming, we have the deep nesting quite often. Nobody has the mental flexibility to adapt to it. We indent the code instead.
Nobody can read a large Lisp program (or even a small one) if it is flattened into one long line, which is then wrapped to a paragraph.
Within a single line of Lisp, there is rarely much nesting going on where the prefix notation causes a problem. The rest is indentation.
Everyone doing serious programming relies on their editor for that, which helps them spot nesting errors.
Then I still won't believe it's a Lisp syntax problem, unless they have a background of success with other languages.
Some people don't have a knack for programming. Among them, some try this and that, and falter in various ways.
> And the people who like this family of languages are annoyed and offended that other languages that do not require this are hundreds of times more popular and are used by millions of people.
Those other languages are harder because of their syntax.
Languages that remove syntax are an affront to the self-image that people have built up over their mastery of convoluted syntax.
For most ordinary people, learning programming is equated with learning syntax. When they are memorizing things like whether >> has higher precedence than +, they feel they are getting smarter.
The idea that this stuff is not necessary, and that you actually don't know jack if you don't know semantics, is a huge threat.
Once a beginner is off into a syntax-heavy language, chances are high we have lost them forever, due to simple ego effects.
(+ (* 3
      (+ (* 2 4)
         (+ 3 5)))
   (+ (- 10 7)
      6))
is a decent point. I mean, how could they present this differently? Pretty much any Lisp book explains this stuff the same way: look, we have these parentheses, and that's what the machine cares about, but we split across lines and indent like this.
If someone finds that reformatting to be incomprehensible and unreadable, virtually no different from the original one liner, they may have some cognitive issue (a form of dyslexia or something like it). Likely they will struggle with programming in any language.
I don't think it's "cognitively typical" to fail to find the visual structure of the above formatting obviously helpful.
Given the multi-line layout:
(+ (* 3
      (+ (* 2 4)
         (+ 3 5)))
   (+ (- 10 7)
      6))
I strongly suspect most ordinary people with neurotypical visual pipelines would find it helpful and more comprehensible over the same expression formatted as one line, regardless of their aptitude for math, or the semantics of programming.

It can't be that only a minority of people have it as a "special skill" to see a simple visual pattern with hierarchical grouping and alignment.
The semantics is future, but tense is a matter of syntax.
The modal verb which establishes future semantics is not in a future tense; it is in its dictionary form: to will.
In archaic English we can say things like "As I will it, so it shall be" where the verb isn't acting as a modal. The modal will comes from that one, I believe.
Let me see if I can recall an example. Okay, how about the word for horse, which is kôň, and man, which is muž. This is masculine: ten kôň (that[masc] horse), ten muž (that[masc] man).
However, in the plural we have tí muži (those[masc] men) and tie kone (those[fem? neut?] horses)?
The demonstrative tie is the same as the feminine one, tie ženy (those[fem] women), or the neuter, tie deti (those children).
Even if that is a special gender difference, it does not fall along the animate versus inanimate line, because horses are clearly animate.
Inanimate objects that are masculine in the singular do fall into this: ten stôl (that[m] table), tie stoly (those[f] tables).
It might be human versus non-human. Collections of non-human male gender things are not themselves males, but neuters.
Suppose you wrote this code in more than two or three places:
(if (and (consp x) (consp (cdr x)))
    (car (cdr x)))
you might define a function for that. Since there is cadr, you don't have to. Also, that function may be more efficient, especially if our compiler doesn't have good CSE. Even if x is just a local variable, there is the issue that (cdr x) is called twice. A clever compiler will recognize that the value of x has not changed, and generate only one access to the cdr.
The function can be coded to do that even in the absence of such a compiler.
(That is realistic; in the early lifecycle of a language, the quality of library functions can easily outpace the quality of compiler code generation, because the library writers use efficient coding tricks, and perhaps even drop into a lower level language where beneficial.)
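As a minimal sketch of that kind of library function (hypothetical name safe-cadr, not the standard cadr), doing the common-subexpression elimination by hand:

  (defun safe-cadr (x)
    (if (consp x)
        (let ((tail (cdr x)))   ; the cdr is fetched exactly once
          (if (consp tail)
              (car tail)))))    ; otherwise falls through to NIL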
If x itself is a complex expression:
(if (and (consp (complex-expr y)) (consp (cdr (complex-expr y))))
    (car (cdr (complex-expr y))))
we will likely code that as:

  (let ((x (complex-expr y)))
    ...)
The function call gives us all that for free: (cadr (complex-expr y)). The argument expression is evaluated once and bound to the formal parameter that the function refers to, and the function body can do manual CSE so the cdr isn't accessed twice.

It's similar to la and le in French. You cannot say "Vive le France"; it has to be "la France".
They are used as helpers in communicating the gender of a noun. If we say "ten stôl", it reaffirms that the noun is masculine. "tá stôl" is ungrammatical.
Other words are like this. E.g. interrogative wh- words: "ktorý muž?" (which man?) "ktorá žena?" (which woman?)
Lisp is like an entire branch of computer science, about which a lot of people in computer science are ignorant.
> It can't be that only a minority of people have it
Only a minority of people have the ability to understand algebra.
Of them, only a minority can usefully use it and apply it.
Of them, only a minority can formulate an algorithm and construct code to perform it.
Of them, only a minority can tolerate having the helpful algebraic notation removed and replaced with a bare abstract syntax tree decorated with parentheses.
Why do you think most people only understand enough about Lisp to make jokes about all the parens?
Why do you think most people gravitate towards the simplest, shortest, infix-notation languages and have moved the entire industry onto them?
By coincidence?
Sure. And you are also aware that there are natural languages which are regarded as being very hard for non-native adults to learn, right?
Some natural human languages are easier than others. This is axiomatic.
Some programming languages are easier than others too. Excluding the ones that are designed to be hard, from INTERCAL to Ook!
Not everyone likes it, or ends up going into a field that requires math, but that's not the same thing as having no ability to understand it.
Why most people only understand enough about Lisp to make jokes about all the parens is like asking why some people only understand enough about Poland to tell jokes, like four Polaks turning a ladder so a fifth one can change a light bulb.
It's usually gratuitous syntax like noun cases, especially when the same feature is not present in any shape in one's native language.
Also writing systems that have large numbers of symbols, which have multiple interpretations.
In any case all mainstream programming languages have prefix notation in the form of function calls. And also statement constructs that begin with a command word followed by argument material.
Imperative sentences in English amount to prefix notation because the subject is omitted (it is implicitly "you"), so we're left with verb and object.
It's just that the majority of modern programmers are not concerned with mathematics, and that's perfectly acceptable. Mathematics itself has so many different levels that even mathematicians themselves are not always certain if they are indeed practicing mathematics.
You may be conflating programmers and computer scientists, but this could also be a perfect case of selection bias, where both of us are simultaneously correct and incorrect in our assertions.
You're focussing on the detail while ignoring the general picture.
I am not comparing Lisp to Mandarin Chinese or something. That would be silly.
What I am saying is that there are a whole bunch of languages (both kinds) which presumably seem perfectly easy to those who grew up with them, but if you didn't and you come to them after learning something else that's much simpler and doesn't do the fancy stuff, then they seem really hard. Consistently, for lots of people, regardless of background.
Doesn't matter how good your Arabic, Mandarin will be hard, and vice versa.
https://www.geeksforgeeks.org/hardest-languages-in-the-world...
That list isn't sorted by your source language, your L1. That doesn't matter.
If they come from an infix-notation imperative language then most people are going to find moving to an impurely-functional prefix-notation one is really hard. And most languages are infix-notation imperative languages.
(+ (* 3
      (+ (* 2 4)
         (+ 3 5)))
   (+ (- 10 7)
      6))
It's just very basic recognition of shapes suggested by incomplete contours.

There are hardly any Lisp programmers today who didn't "come from" prefix languages.
All mainstream languages are heavily steeped in prefix notation. The infix stuff is just a small core, typically. Mainly it is for arithmetic. Sometimes for string manipulation or set operations on containers, perhaps I/O.
Libraries are invoked with f(a, ...) prefix or perhaps obj.f(a, ...) mixed infix and prefix.
Libraries contain far more material than the infix parts of the language.
Even small programs are divided into functions, invoked with prefix. Prefix is relied on for major program organization.
Command languages use prefix: copy filea.txt this/dir.
Statement and definition structures in mainstream languages are prefix: class foo {, goto label, return 42, procedure foo(var x : integer), ...
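To make that concrete, here is roughly the same pair of calls in C-style prefix notation (shown as comments) and in Common Lisp; the shapes line up more than the parenthesis jokes suggest:

  ;; C:  printf("x = %d\n", 42);
  (format t "x = ~d~%" 42)

  ;; C:  max(abs(3 - 8), 1)  /* evaluates to 5 */
  (max (abs (- 3 8)) 1)      ; => 5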
The idea that programmers coming to Lisp are confused by prefix does not hold water.