
222 points futurisold | 8 comments
sram1337 ◴[] No.44400463[source]
This is the voodoo that excites me.

Examples I found interesting:

Semantic map lambdas

  S = Symbol(['apple', 'banana', 'cherry', 'cat', 'dog'])
  print(S.map('convert all fruits to vegetables'))
  # => ['carrot', 'broccoli', 'spinach', 'cat', 'dog']

comparison parameterized by context

  # Contextual greeting comparison
  greeting = Symbol('Hello, good morning!')
  similar_greeting = 'Hi there, good day!'

  # Compare with specific greeting context
  result = greeting.equals(similar_greeting, context='greeting context')
  print(result) # => True

  # Compare with different contexts for nuanced evaluation
  formal_greeting = Symbol('Good morning, sir.')
  casual_greeting = 'Hey, what\'s up?'

  # Context-aware politeness comparison
  politeness_comparison = formal_greeting.equals(casual_greeting, context='politeness level')
  print(politeness_comparison) # => False

bitwise ops

  # Semantic logical conjunction - combining facts and rules
  horn_rule = Symbol('The horn only sounds on Sundays.', semantic=True)
  observation = Symbol('I hear the horn.')
  conclusion = horn_rule & observation # => Logical inference
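The `&` behavior above can be emulated with plain Python operator overloading. A minimal sketch of the idea, where the `Symbol` class and `fake_llm` below are illustrative stand-ins, not the library's actual implementation:

```python
# Illustrative sketch: routing Python's `&` operator to an LLM call.
# `fake_llm` is a hypothetical stand-in for a real model call.
class Symbol:
    def __init__(self, value, semantic=False):
        self.value = value
        self.semantic = semantic

    def __and__(self, other):
        other_value = other.value if isinstance(other, Symbol) else other
        prompt = (f"Premise: {self.value}\n"
                  f"Observation: {other_value}\n"
                  f"What logically follows?")
        return Symbol(fake_llm(prompt), semantic=True)

def fake_llm(prompt):
    # Stand-in for a real model call; returns a canned inference.
    return "Therefore, today is Sunday."

horn_rule = Symbol("The horn only sounds on Sundays.", semantic=True)
observation = Symbol("I hear the horn.")
conclusion = horn_rule & observation
print(conclusion.value)
```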

`interpret()` seems powerful.

OP, what inspired you to make this? Where are you applying it? What has been your favorite use case so far?

replies(3): >>44400592 #>>44401514 #>>44401936 #
1. haileys ◴[] No.44401514[source]
Why is carrot the vegetablefication of apple?
replies(3): >>44401518 #>>44401567 #>>44401606 #
2. herval ◴[] No.44401518[source]
Also if you run it twice, is it gonna be a carrot again?
replies(3): >>44401564 #>>44403465 #>>44405055 #
4. pfdietz ◴[] No.44401567[source]
Are you asking for the root cause?
5. HappMacDonald ◴[] No.44401606[source]
I think it's interpreting the command as "replace each fruit with a vegetable", and it might intuit "make the resulting vegetables unique from one another" but otherwise it's not trying to find the "most similar" vegetable to every fruit or anything like that.
replies(1): >>44403378 #
6. futurisold ◴[] No.44403378[source]
This is the correct view. Since the instruction was ambiguous, the LLM did its best to satisfy it -- and it did.
7. futurisold ◴[] No.44403465[source]
It's subject to randomness. But you're ultimately in control of the LLM's hyperparams -- temperature, top_p, and seed -- so you get deterministic outputs if that's what you need. However, there are downsides to these kinds of determinism tweaks because of the LLM's inherently autoregressive nature.
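The temperature point can be shown with a toy sampler, independent of any particular LLM API: as temperature goes to 0, softmax sampling collapses to argmax and the output becomes deterministic regardless of seed.

```python
import math, random

def sample_token(logits, temperature, rng):
    """Softmax-with-temperature sampling over a {token: logit} dict."""
    if temperature == 0:          # greedy: always the highest-logit token
        return max(logits, key=logits.get)
    scaled = {t: l / temperature for t, l in logits.items()}
    m = max(scaled.values())
    weights = {t: math.exp(l - m) for t, l in scaled.items()}
    total = sum(weights.values())
    r = rng.random() * total
    for token, w in weights.items():
        r -= w
        if r <= 0:
            return token
    return token

logits = {"carrot": 2.0, "broccoli": 1.5, "spinach": 0.5}

# Temperature 0: deterministic no matter which seed is used.
greedy = {sample_token(logits, 0, random.Random(s)) for s in range(10)}
# Temperature 1 with a fixed seed: reproducible, but a different path
# through the distribution may pick a lower-probability token.
seeded = sample_token(logits, 1.0, random.Random(42))
print(greedy, seeded)
```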

For instance, with temperature 1 there *could be* a path that satisfies your instruction which otherwise gets missed. There's interesting work at the intersection of generative grammars and LLMs, where you cast the problem as an FSM/PDA such that you only sample from that grammar with the LLM (you use something like logit_bias to turn off unwanted tokens and keep only those that define the grammar). You can define grammars with libs like lark or parsimonious, and this is how people solved structured JSON output with LLMs -- JSON is a formal grammar.
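The grammar-constrained idea can be sketched without any LLM at all: encode the grammar as an FSM and, at each step, mask every token that is not a legal transition from the current state. A toy three-state grammar below; real systems map the same mask onto logit_bias or per-step logit masking.

```python
# Toy constrained decoding for the grammar:  SENTENCE := "the" NOUN VERB
# Only tokens allowed by the current FSM state survive each step;
# `model_scores` stands in for the model's raw logits.
FSM = {
    "start": {"the": "noun"},
    "noun":  {"horn": "verb", "dog": "verb"},
    "verb":  {"sounds": "done", "barks": "done"},
}

def model_scores(vocab):
    # Hypothetical stand-in for model logits: prefer later vocab entries.
    return {tok: i for i, tok in enumerate(vocab)}

def constrained_decode():
    vocab = ["the", "horn", "dog", "sounds", "barks", "banana"]
    state, out = "start", []
    while state != "done":
        allowed = FSM[state]                 # the mask: legal next tokens
        scores = model_scores(vocab)
        tok = max(allowed, key=scores.get)   # argmax over the masked set
        out.append(tok)
        state = allowed[tok]
    return " ".join(out)

print(constrained_decode())  # "banana" can never appear in the output
```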

Contracts alleviate some of this through post-validation, *as long as* you find a way to semantically encode your deterministic constraint.
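A minimal sketch of contract-style post-validation: call the model, check the output against a deterministic predicate, and retry on failure. The `flaky_llm` below is a hypothetical stand-in that fails once and then succeeds.

```python
# Post-validation "contract": retry until the output satisfies a
# deterministic check. `flaky_llm` is a hypothetical stand-in.
VEGETABLES = {"carrot", "broccoli", "spinach", "kale"}

def contract(call, validate, retries=3):
    for _ in range(retries):
        result = call()
        if validate(result):
            return result
    raise ValueError("output never satisfied the contract")

_attempts = iter([
    ["carrot", "banana", "cat"],           # bad: a fruit survived
    ["carrot", "broccoli", "cat", "dog"],  # good
])

def flaky_llm():
    return next(_attempts)

def only_vegetables_or_animals(items):
    return all(x in VEGETABLES or x in {"cat", "dog"} for x in items)

result = contract(flaky_llm, only_vegetables_or_animals)
print(result)
```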

8. d0100 ◴[] No.44405055[source]
Since these seem like short prompts, you can send data that was correct in past prompts along as context

You can create a test suite for your code that verifies results against another prompt or a dictionary check

  t.test(
      Symbol(['apple', 'banana', 'cherry', 'cat', 'dog']).map('convert all fruits to vegetables'),
      "list only has vegetables and cat, dog"
  )
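The `t.test` call above is pseudocode; the dictionary-verification half of the idea can be approximated today with an ordinary assertion-based check, where `mapped` stands in for the LLM's output:

```python
# Dictionary-based verification of a semantic map result, written as a
# plain unit-test helper. `mapped` stands in for the LLM's output.
VEGETABLES = {"carrot", "broccoli", "spinach", "kale", "potato"}
NON_FRUITS = {"cat", "dog"}

def check_fruits_to_vegetables(original, mapped):
    assert len(mapped) == len(original)
    for item in mapped:
        assert item in VEGETABLES or item in NON_FRUITS, item
    # Non-fruit items must pass through unchanged, in order.
    assert [x for x in mapped if x in NON_FRUITS] == \
           [x for x in original if x in NON_FRUITS]

# Example run against the output quoted upthread:
check_fruits_to_vegetables(
    ['apple', 'banana', 'cherry', 'cat', 'dog'],
    ['carrot', 'broccoli', 'spinach', 'cat', 'dog'],
)
print("ok")
```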