
648 points bradgessler | 1 comment
bsenftner No.44014381
I believe the author is awash in a sea they do not understand, and that is the cause of their discomfort. When they describe their ideas being fully realized by LLMs, are the ideas really realized, or do they merely appear so because the words and terms arrive in the same style and register as the prompt?

Performing any type of intellectual, philosophic, or exploratory work with LLMs is extremely subtle, largely because neither you nor the model knows what you are seeking. The discovery process with LLMs is not writing prompts and varying them by trial and error in the hope of getting "something else, something better" <- that is pure incomprehension of how they work, and of how to work with them.

Very few seem to realize the mirror aspect embodied within LLMs: they will mirror you back. If you are unaware of this, you may not be getting the replies you really seek; you're receiving "comfort replies," replies that mirror your metadata (style, nuance) more than the factual logic of your requests, if any factual requests are made at all.

There is an entire body of work, multiple careers' worth of human effort, still needed to document the new, subtle logical keys to working with LLMs. These are new logical constructs that have never existed before, not even fictionally, not realized as they are now, with all the implications and details bare, in our faces, yet completely misunderstood as people attempt old imperative methods that will not work with this new entity, whose characteristics are completely different from anything we have experience of.

A major issue with getting developers to use LLMs effectively is that many developers are themselves weak to terrible communicators. LLMs are fantastic communicators that will mirror their audience in an attempt to be better understood, but when that audience is a weak communicator the entire process disintegrates. That is what I suspect is happening with the blog post author: an inability to be discriminating enough in their language to get past the easy, immediate, sophomore-level replies and arrive at a context within the LLM's capacity that has the integrity they seek. And even then, they must meet that context intellectually and linguistically, or it is destroyed. So subtle.

replies(2): >>44014454 #>>44014769 #
daveguy No.44014769
> LLMs are fantastic communicators, who will mirror their audience in an attempt to be better understood, but...

This seems to represent a complete misunderstanding of what LLMs are and do. There is not a single LLM that "attempts" to be understood, or attempts anything else other than to produce tokens. They have no autonomy.

replies(1): >>44015078 #
bsenftner No.44015078
Don't nitpick my use of the term "attempt"; I'm describing how the words used in a prompt generate the context that then generates the reply. In this specific human-language context, my use of "attempt" describes the LLM's choice of words in its reply. I know there is no "attempting" as a human would "attempt." I'm paraphrasing and alluding rather than writing a formal and complex essay.