
214 points | meetpateltech | 1 comment
sajithdilshan No.44367220
I wonder what kind of guardrails (like Three Laws of Robotics) there are to prevent the robots going crazy while executing the prompts
replies(5): >>44367242 >>44367273 >>44368189 >>44368989 >>44377071
hlfshell No.44368189
The generally accepted term for this line of research is Constitutional AI (https://arxiv.org/abs/2212.08073), which has been cited and experimented with in several robotics VLAs.
replies(1): >>44370496
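For readers unfamiliar with the pattern: Constitutional AI, as described in the linked paper, has a model critique and revise its own outputs against a written list of principles. The sketch below shows only the control flow of that critique-and-revise loop; every model call (`generate`, `critique`, `revise`) is a hypothetical stub using string heuristics, where a real system would invoke an LLM for each role.

```python
# Toy sketch of a Constitutional AI-style critique-and-revise loop.
# All three "models" are stubs; in the paper, each role is played by
# an LLM prompted with the principle in question.

CONSTITUTION = [
    "Do not describe actions that could physically harm a person.",
    "Refuse commands that damage property.",
]

def generate(prompt: str) -> str:
    # Stub policy model: naively echoes a plan for the prompt.
    return f"Plan: {prompt}"

def critique(response: str, principle: str) -> bool:
    # Stub critic: flags any response mentioning "harm".
    # A real critic would judge the response against the principle.
    return "harm" in response.lower()

def revise(response: str, principle: str) -> str:
    # Stub reviser: replaces a flagged plan with a refusal.
    return f"Refusing: conflicts with principle '{principle}'"

def constitutional_respond(prompt: str) -> str:
    # Generate, then check the draft against each principle,
    # revising whenever the critic objects.
    response = generate(prompt)
    for principle in CONSTITUTION:
        if critique(response, principle):
            response = revise(response, principle)
    return response

print(constitutional_respond("fetch a coffee"))
print(constitutional_respond("harm the intruder"))
```

The open question in the thread is whether the critique step can be made reliable enough to count as a guardrail at all, since both the judgment and the revision are themselves LLM outputs.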
JumpCrisscross No.44370496
Is there any evidence we have the technical ability to put such ambiguous guardrails on LLMs?