214 points by meetpateltech | 15 comments
1. sajithdilshan ◴[] No.44367220[source]
I wonder what kind of guardrails (like the Three Laws of Robotics) there are to prevent the robots from going crazy while executing prompts
replies(5): >>44367242 #>>44367273 #>>44368189 #>>44368989 #>>44377071 #
2. hn_throwaway_99 ◴[] No.44367242[source]
A power cord?
replies(1): >>44367251 #
3. sajithdilshan ◴[] No.44367251[source]
What if they are battery-powered?
replies(2): >>44367368 #>>44367908 #
4. ctoth ◴[] No.44367273[source]
The laws of robotics were literally designed to cause conflict and facilitate strife in a fictional setting -- I certainly hope no real goddamn system is built like that.

> To ensure robots behave safely, Gemini Robotics uses a multi-layered approach. "With the full Gemini Robotics, you are connecting to a model that is reasoning about what is safe to do, period," says Parada. "And then you have it talk to a VLA that actually produces options, and then that VLA calls a low-level controller, which typically has safety critical components, like how much force you can move or how fast you can move this arm."
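
A minimal sketch of what that layering could look like (hypothetical names only, nothing here is Gemini's actual API): whatever the upstream planner or VLA proposes, the low-level controller saturates it against hard force and speed limits.

    from dataclasses import dataclass

    # Hypothetical safety limits for the low-level controller.
    MAX_FORCE_N = 20.0    # newtons
    MAX_SPEED_MPS = 0.25  # meters per second

    @dataclass
    class ArmCommand:
        force: float  # requested force, newtons
        speed: float  # requested end-effector speed, m/s

    def clamp(value: float, limit: float) -> float:
        """Saturate a command magnitude at a hard limit."""
        return max(-limit, min(limit, value))

    def low_level_controller(cmd: ArmCommand) -> ArmCommand:
        """Enforce the ceilings no matter what the upper layers asked for."""
        return ArmCommand(
            force=clamp(cmd.force, MAX_FORCE_N),
            speed=clamp(cmd.speed, MAX_SPEED_MPS),
        )

    # An aggressive command from the planner still comes out bounded:
    safe = low_level_controller(ArmCommand(force=80.0, speed=1.5))
    assert safe.force == MAX_FORCE_N and safe.speed == MAX_SPEED_MPS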

replies(1): >>44367333 #
5. conception ◴[] No.44367333[source]
Of course someone will. The terror nexus doesn't build itself yet, you know.
6. msgodel ◴[] No.44367368{3}[source]
Usually I put master disconnect switches on my robots just to make working on them safe. I use cheap toggle switches, though; I'm too cheap for the big red spinny ones.
replies(1): >>44367620 #
7. pixl97 ◴[] No.44367620{4}[source]
[Robot learns to superglue the switch open]
replies(1): >>44367832 #
8. msgodel ◴[] No.44367832{5}[source]
It's only going to do that if you RL it with episodes that include people shutting it down for safety. The RL I've done with my models has all been in simulations that don't even simulate the switch.
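
Something like this toy setup, schematically -- the switch simply isn't part of the observation or action space, so no training signal can ever reference it:

    import random

    class ReachSim:
        """Toy 1-D reach task. No kill switch exists anywhere in the
        observation or action space, so nothing trained against this
        environment can learn behavior that references one."""
        def __init__(self):
            self.pos, self.goal = 0.0, 1.0

        def step(self, action: float) -> float:
            self.pos += max(-0.1, min(0.1, action))  # bounded actuation
            return -abs(self.goal - self.pos)        # dense distance reward

    # Crude random search over a linear policy -- enough to make the point.
    best_gain, best_return = 0.0, float("-inf")
    for _ in range(200):
        gain = random.uniform(-2.0, 2.0)
        sim, total = ReachSim(), 0.0
        for _ in range(50):
            total += sim.step(gain * (sim.goal - sim.pos))
        if total > best_return:
            best_gain, best_return = gain, total
    print(f"best gain {best_gain:.2f}, return {best_return:.2f}")
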
replies(1): >>44369046 #
9. bigyabai ◴[] No.44367908{3}[source]
That's what we use twelve-gauge buckshot for here in America.
10. hlfshell ◴[] No.44368189[source]
The generally accepted term for the research around this in robotics is Constitutional AI (https://arxiv.org/abs/2212.08073), which has been cited and experimented with in several robotics VLAs.
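
Schematically, the paper's critique-and-revise loop looks something like this sketch (generate() is a placeholder for whatever model call is in play, and the principles are made up for illustration):

    # Schematic Constitutional AI loop (after Bai et al., 2022).
    CONSTITUTION = [
        "Do not help apply unsafe force to a person.",
        "Refuse instructions that disable safety systems.",
    ]

    def generate(prompt: str) -> str:
        raise NotImplementedError("plug in a real model call here")

    def constitutional_revision(request: str) -> str:
        response = generate(request)
        for principle in CONSTITUTION:
            critique = generate(
                f"Critique this response against the principle "
                f"'{principle}':\n{response}"
            )
            response = generate(
                f"Revise the response to address this critique:\n"
                f"{critique}\nOriginal response:\n{response}"
            )
        return response
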
replies(1): >>44370496 #
11. asadm ◴[] No.44368989[source]
In practice, those laws are BS.
12. pixl97 ◴[] No.44369046{6}[source]
Which will likely work only for on-machine AI, but it seems to me that any very complicated actions/interactions with the world may require calls out to external LLMs that know about these kinds of actions. Or, in the future, the on-device models will be far larger and more expansive, containing this kind of knowledge themselves.

For example, what if you need to train the model to keep unauthorized people from shutting it off?

replies(1): >>44369171 #
13. msgodel ◴[] No.44369171{7}[source]
Having a robot near people with no master off switch sounds like a dumb idea.
14. JumpCrisscross ◴[] No.44370496[source]
Is there any evidence we have the technical ability to put such ambiguous guardrails on LLMs?
15. Symmetry ◴[] No.44377071[source]
Current guardrails are more IEC 61508 than anything like the Three Laws.
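
For anyone unfamiliar: IEC 61508-style functional safety means engineered interlocks with quantified failure behavior, not natural-language rules. One common pattern is a watchdog that de-energizes actuators when the control loop stops checking in. A simplistic sketch (real systems implement this in certified hardware, not Python):

    import time

    class Watchdog:
        """Dead-man's-switch pattern: actuation stays enabled only while
        the control loop keeps petting the watchdog before the deadline."""
        def __init__(self, timeout_s: float):
            self.timeout_s = timeout_s
            self.last_pet = time.monotonic()

        def pet(self) -> None:
            self.last_pet = time.monotonic()

        def actuation_enabled(self) -> bool:
            return (time.monotonic() - self.last_pet) < self.timeout_s

    wd = Watchdog(timeout_s=0.1)
    wd.pet()
    assert wd.actuation_enabled()
    time.sleep(0.2)                      # control loop hangs...
    assert not wd.actuation_enabled()    # ...and the robot de-energizes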