
181 points by thunderbong | 4 comments
numpad0 ◴[] No.45083302[source]
I don't feel good doing it, but is anyone else finding that not capitalizing text, maintaining a slightly abrasive attitude, and consciously stealing credit yields better results from coding agents? e.g. "i want xxx implemented, can you do", "ok you do" rather than "I'm wondering if..." etc.
replies(2): >>45083401 #>>45084903 #
SV_BubbleTime ◴[] No.45083401[source]
There is so much subjective placebo with “prompt engineering” that anyone pushing any one thing like this just shows me they haven’t used it enough yet. No offense, just seeing it everywhere.

Better results if you… tip the AI, offer it physical touch, tell it to “go slow and take a deep breath first”…

It’s a subjective system without control testing. Humans are definitely going to apply religion, dogma, and ritual to it.
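
For what it's worth, the missing control testing isn't even hard to sketch. Here's a minimal, hypothetical Python harness that runs two prompt phrasings over the same tasks with paired seeds and counts passes; call_model and passes_tests are assumptions standing in for whatever model API and eval suite you actually use:

    from collections import Counter

    # Hypothetical stand-ins: wire these to your real model API and test suite.
    def call_model(prompt: str, seed: int) -> str:
        raise NotImplementedError("plug in your LLM call here")

    def passes_tests(task: str, completion: str) -> bool:
        raise NotImplementedError("plug in your eval here")

    TASKS = ["task 1", "task 2"]  # your benchmark task descriptions
    VARIANTS = {
        "polite": "I'm wondering if you could implement: {task}",
        "terse": "i want {task} implemented, can you do",
    }

    def run_trial(n_seeds: int = 20) -> Counter:
        wins = Counter()
        for task in TASKS:
            for seed in range(n_seeds):  # same seed per variant = paired trial
                for name, template in VARIANTS.items():
                    out = call_model(template.format(task=task), seed=seed)
                    wins[name] += passes_tests(task, out)
        return wins

Run that over enough tasks and seeds and you'd have a signal instead of a ritual.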

replies(4): >>45083461 #>>45083616 #>>45083734 #>>45083934 #
1. kachapopopow ◴[] No.45083461[source]
I tell my agent to off itself every couple of hours; it's definitely placebo, since you're just introducing noise, which might or might not be good. Prefixing with "hmm, <prompt>" has been my go-to for a bit when I want to force it to give me different results, because it appears to trigger some latent regions of the LLMs.
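
(Concretely, the trick is just a prefix wrapper; a hypothetical sketch:)

    def nudge(prompt: str) -> str:
        # prepend filler to perturb the model's sampling path ("add noise")
        return "hmm, " + prompt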
replies(1): >>45083667 #
2. SV_BubbleTime ◴[] No.45083667[source]
This seems to be exactly what I’m talking about though. We made a completely subjective system and now everyone has completely subjective advice about what works.

I’m not saying introducing noise isn’t a valid option, just that prescribing method ‘X’ or ‘Y’ as dogma is straight bullshit.

replies(1): >>45117299 #
3. kachapopopow ◴[] No.45117299[source]
I was thinking about this and I disagree: if you can force "better" paths for programming based on the prompt, I think that might well give you better results.
replies(1): >>45128218 #
4. SV_BubbleTime ◴[] No.45128218{3}[source]
IF, MIGHT, “BETTER”

… right. Now you are on the same page. Maybe adding fluff helps, maybe it hurts. You have no way of knowing, before or after the prompt.

Show me research showing, over thousands of benchmarks, that pole riding your AI before the request gives better responses.

It’s placebo.