
549 points thecr0w | 1 comment | source
sigseg1v ◴[] No.46183772[source]
Curious if you've tested something such as:

- "First, calculate the orbital radius. To do this accurately, measure the average diameter of each planet, p, and the average distance from the center of the image to the outer edge of the planets, x, and calculate the orbital radius r = x - p"

- "Next, write a unit test script that we will run that reads the rendered page and confirms that each planet is on the orbital radius. If a planet is not, output the difference you must shift it by to make the test pass. Use this feedback until all planets are perfectly aligned."
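A minimal sketch of the kind of check the second prompt describes, assuming the rendered planet positions can be extracted as pixel coordinates (the planet names, positions, and radii below are hypothetical placeholders, not taken from the actual project):

```python
import math

# Hypothetical image center, in pixels.
IMAGE_CENTER = (500.0, 500.0)

def radial_error(center, planet_pos, expected_radius):
    """How far the planet's center is off its orbital circle.

    Positive means outside the orbit, negative means inside; the agent
    should shift the planet radially by the negation of this value.
    """
    dx = planet_pos[0] - center[0]
    dy = planet_pos[1] - center[1]
    return math.hypot(dx, dy) - expected_radius

def check_orbits(planets, center=IMAGE_CENTER, tolerance=1.0):
    """Return every planet whose center deviates from its orbit,
    mapped to the size of the deviation in pixels."""
    failures = {}
    for name, pos, radius in planets:
        err = radial_error(center, pos, radius)
        if abs(err) > tolerance:
            failures[name] = err
    return failures

# Example data: "venus" sits exactly on a 120px orbit,
# "mars" is 5px outside its 200px orbit.
planets = [
    ("venus", (620.0, 500.0), 120.0),
    ("mars", (500.0, 295.0), 200.0),
]
print(check_orbits(planets))  # → {'mars': 5.0}
```

Feeding the returned deviations back to the agent on each iteration gives it the tight, automated feedback loop the downstream replies discuss.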

replies(4): >>46183892 #>>46184167 #>>46184209 #>>46185524 #
turnsout ◴[] No.46183892[source]
Yes, this is a key step when working with an agent: if it's able to check its own work, it can iterate pretty quickly. If you're in the loop, something is wrong.

That said, I love this project. haha

replies(1): >>46184158 #
monsieurbanana ◴[] No.46184158[source]
I'm trying to understand why this comment got downvoted. My best guess is that "if you're in the loop, something is wrong" was interpreted as meaning there should be no human involvement at all.

The loop here, imo, refers to the feedback loop. And it's true that ideally there should be no human involvement there. A tight feedback loop is as important for LLMs as it is for humans. The more automated you make it, the better.

replies(1): >>46184956 #
turnsout ◴[] No.46184956{3}[source]
Yes, maybe I goofed on the phrasing. If you're in the feedback loop, something is wrong. Obviously a human should be "in the loop" in the sense that they're aware of and reviewing what the agent is doing.