
Hofstadter on Lisp (1983)

(gist.github.com)
372 points by Eric_WVGG
taeric ◴[] No.41860340[source]
I do think LISP remains the major language that can encompass the strange loop idea he explored in his work. I know LISP is not the only homoiconic language, but it is the biggest one people know how to use where the "eval" function doesn't take in a string that has to be parsed.

I hate that people are convinced LISP == functional programming, writ large. Not that I dislike functional programming, but the symbolic nature of it is far more interesting to me. And it amuses me to no end that I can make a section of code driven by (go tag) sections, such that I get GOTO programming in it very easily.

replies(6): >>41860620 #>>41860888 #>>41861136 #>>41861546 #>>41862219 #>>41893256 #
bbor ◴[] No.41862219[source]
I love and relate to any impassioned plea on SWE esoterica, so this seems like as good of a place as any to ask: What, in practice, is this deep level of "homoiconic" or "symbolic" support used for that Python's functools (https://docs.python.org/3/library/functools.html) doesn't do well? As someone building a completely LISPless symbolic AGI (sacrilege, I know), I've always struggled with this and would love any pointers the experts here have. Is it something to do with Monads? I never did understand Monads...

To make this comment more actionable, my understanding of Python's homoiconic functionality comes down to these methods, more-or-less:

1. Functions that apply other functions to iterables, e.g. filter(), map(), and reduce(). AKA the bread-n-butter of modern day JavaScript.

2. Functions that wrap a group of functions and route calls accordingly, e.g. @singledispatch.

3. Functions that provide more general control-flow or performance conveniences for other functions, e.g. @cache and partial().

4. Functions that arbitrarily wrap other functions, namely wraps(). (A rough sketch of all four follows below.)
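
To make that concrete, a toy sketch of those four categories using nothing beyond stdlib functools -- names are made up, nothing authoritative:

    from functools import cache, partial, reduce, singledispatch, wraps

    # 1. apply functions across iterables
    squares_sum = reduce(lambda a, b: a + b, map(lambda x: x * x, range(5)))

    # 2. route calls to the right implementation by argument type
    @singledispatch
    def describe(x):
        return f"something: {x}"

    @describe.register
    def _(x: int):
        return f"an int: {x}"

    # 3. control-flow / performance conveniences
    @cache
    def fib(n):
        return n if n < 2 else fib(n - 1) + fib(n - 2)

    double = partial(lambda a, b: a * b, 2)

    # 4. arbitrarily wrap another function, preserving its metadata
    def logged(f):
        @wraps(f)
        def inner(*args, **kwargs):
            print(f"calling {f.__name__}")
            return f(*args, **kwargs)
        return inner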

Certainly not every language has all these defined in a standard library, but none of them seem that challenging to implement by hand when necessary -- in other words, they basically come down to conveniences for calling functions in weird ways. Certainly none of these live up to the glorious descriptions of homoiconic languages in essays like this one, where "self-introspection" is treated as a first class concern.

What would a programmer in 2024 get from LISP that isn't implemented above?

replies(2): >>41862431 #>>41865845 #
taeric ◴[] No.41862431[source]
I'm basically a shill for my one decent blog post from a while back. :D

https://taeric.github.io/CodeAsData.html

The key for me really is in the signature for "eval." In Python, as an example, eval takes in a string. So, to work with the expression, it has to fully parse it, with all of the danger that entails. For Lisp, eval takes in a form. Still dangerous to evaluate random code, mind. But you can walk the code without evaluating it.
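
To illustrate for the Python folks -- a toy sketch, not the code from the post: in Python you have to go through ast.parse to inspect anything eval-able, whereas a Lisp form is already plain data you can walk or rewrite before evaluating.

    import ast
    import operator

    # Python: eval consumes a string; inspecting the code means parsing it first.
    src = "1 + 2 * 3"
    tree = ast.parse(src, mode="eval")
    print(ast.dump(tree.body))          # walk the parsed AST, not the raw string

    # Lisp-ish: the "form" is already nested data, so a toy eval can consume it
    # directly, and the code can be rewritten without ever re-parsing text.
    ENV = {"+": operator.add, "*": operator.mul}

    def lisp_eval(form):
        if isinstance(form, list):      # (op arg1 arg2 ...)
            return ENV[form[0]](*(lisp_eval(a) for a in form[1:]))
        return form                     # a literal

    form = ["+", 1, ["*", 2, 3]]
    print(lisp_eval(form))              # 7
    form[0] = "*"                       # edit the code as data...
    print(lisp_eval(form))              # 6 ...then evaluate the rewritten form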

replies(1): >>41864257 #
bbor ◴[] No.41864257[source]
HackerNews sadly never fails to disappoint. Thanks for taking the time to share, that was exactly what I was looking for! Would endorse this link for any lurkers.

The LISP (elisp?) syntax itself gives me a headache to parse so I think I'll stay away for now, but I'll definitely be thinking about how to build similar functionality into my high-level application code -- self modification is naturally a big part of any decent AGI project. At the risk of stating the obvious, the last sentence was what drove it home for me:

    It is not just some opaque string that gets to enjoy all of the benefits of your language. It is a first class list of elements that you can inspect and have fun with. 
I'm already working with LLM-centric "grammars" representing sets of standpoint-specific functions ("pipelines"), but so far I've only been thinking about how to construct, modify, and employ them. Intelligently composing them feels like quite an interesting rabbit hole... Especially since they mostly consist of prose in minimally-symbolic wrappers, which are probably a lot easier for an engineer to mentally model--human or otherwise. Reminds me of the words of wonderful diehard LISP-a-holic Marvin Minsky:

  The future work of mind design will not be much like what we do today. ...what we know as programming will change its character entirely -- to an activity that I envision to be more like sculpturing.
  To program today, we must describe things very carefully because nowhere is there any margin for error. But once we have modules that know how to learn, we won’t have to specify nearly so much -- and we’ll program on a grander scale, relying on learning to fill in details.
In other words: What if the problem with Lisp this whole time really was the parentheses? ;)

source is Logical Versus Analogical or Symbolic Versus Connectionist or Neat Versus Scruffy: https://onlinelibrary.wiley.com/doi/full/10.1609/aimag.v12i2...

replies(3): >>41864594 #>>41866098 #>>41867392 #
BoiledCabbage ◴[] No.41866098[source]
> HackerNews sadly never fails to disappoint.

FYI, that means the opposite of how you used it.

"Never fails to disappoint" is an idiom that means a person or thing consistently disappoints.

replies(2): >>41867588 #>>41868307 #
blenderob ◴[] No.41868307[source]
But they said "sadly". They said "HackerNews sadly never fails to disappoint".

If they really meant the opposite that "HackerNews never disappoints" why would that be "sad"?

But since they said "sadly" I think they really meant what they wrote, which is that HN consistently disappoints them. Maybe they meant that the Lisp articles on HN do not go into describing exactly how "code as data" works in actual practical terms, and that is consistently disappointing? And they are finally happy that someone explained "code as data" to them in the comments section?