
728 points freetonik | 13 comments
1. philjohn ◴[] No.44977150[source]
I like the pattern of including each prompt used to make a given PR. Yes, I know LLMs aren't deterministic, but it also gives context for the steps required to reach the end state.
replies(4): >>44977198 #>>44979222 #>>44980141 #>>44980617 #
2. mock-possum ◴[] No.44977198[source]
I’m using SpecStory in VS Code + Cursor for this - it keeps a nice little md doc of all your LLM interactions, and you can check that into source control if you like so it’s included in pull requests and can be referenced during code review.
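The pattern itself doesn't depend on any particular tool. A minimal sketch, with an invented file name and format (not SpecStory's actual output): append each prompt/response pair to a markdown log that lives in the repo, so it travels with the PR.

```python
from datetime import datetime, timezone
from pathlib import Path

LOG = Path(".llm-history.md")  # hypothetical log file, tracked in git
LOG.unlink(missing_ok=True)    # start fresh for this demo

def record_interaction(prompt: str, response: str) -> None:
    """Append one prompt/response pair as a timestamped markdown section."""
    stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
    entry = (
        f"\n## {stamp}\n\n"
        f"**Prompt:**\n\n{prompt}\n\n"
        f"**Response:**\n\n{response}\n"
    )
    with LOG.open("a", encoding="utf-8") as f:
        f.write(entry)

record_interaction("Refactor the parser", "Done; see diff.")
```

Checking the resulting file into source control makes the conversation reviewable alongside the code, which is all the pattern really requires.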
3. Filligree ◴[] No.44979222[source]
It's ridiculous and impractical, honestly. A single AI-generated PR would likely involve at least 10-20 prompts, interspersed with testing, manual edits to context / guideline files without which those prompts don't have the same effect, manual coding, and more. A screen recording would do better.
replies(1): >>44979691 #
4. asdff ◴[] No.44979691[source]
Is there really no logging capability with these tools that would track all of that prompting/testing/editing/inputting?
replies(1): >>44980023 #
5. Filligree ◴[] No.44980023{3}[source]
Sure, screen recorders exist.

And if contributions are that unwelcome, then it's better not to contribute. There has to be some baseline level of trust that the contributor is trying to do the right thing; I get enough spying from the corporations in my life.

replies(1): >>44980812 #
6. verdverm ◴[] No.44980141[source]
I've been doing that for most of the commits in this project as an experiment, gemini, human, or both. Not sure what I'm going to do with that history, but I did at least want to start capturing it

https://github.com/blebbit/at-mirror/commits/main/

7. mhh__ ◴[] No.44980617[source]
Doesn't work. People will include fake prompts; the real ones are way too personal. You can learn a lot from how people use these tools.
replies(1): >>44981612 #
8. asdff ◴[] No.44980812{4}[source]
I'm not talking about a screen recorder, but a log file I could be given and use as input to repeat the work exactly as it was done. Sort of like how one could replay an initially exploratory analysis from a bash history file, using the same exact commands. I'm surprised that isn't already a capability, given the business interest in AI. One would think they'd want to cache these prompt flows to ensure long-term fidelity in process.
replies(1): >>44985132 #
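The replayable log asdff describes could be sketched as an append-only JSON Lines file, one event per line, covering prompts, manual edits, and test runs alike. The event names and fields here are assumptions, not any existing tool's format:

```python
import json
from pathlib import Path

SESSION_LOG = Path("session.jsonl")   # hypothetical session log
SESSION_LOG.unlink(missing_ok=True)   # start fresh for this demo

def log_event(kind: str, payload: dict) -> None:
    """Append one session event as a single JSON object per line."""
    with SESSION_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps({"kind": kind, **payload}) + "\n")

def replay() -> list[dict]:
    """Read the events back in order, bash-history style."""
    return [json.loads(line) for line in SESSION_LOG.read_text().splitlines()]

# A tiny session: prompt, manual edit, test run.
log_event("prompt", {"text": "Add input validation"})
log_event("edit", {"file": "main.py", "diff": "+ if not x: raise ValueError"})
log_event("test", {"cmd": "pytest", "exit_code": 0})

events = replay()
print([e["kind"] for e in events])  # ['prompt', 'edit', 'test']
```

Because each line is self-contained JSON, the log can be filtered, diffed, or handed to someone else to step through, which is the bash-history analogy in comment 8.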
9. nullc ◴[] No.44981612[source]
That's like saying "sorry, source code is too personal. In my 'open' project you get only binaries".

... and then I think about all the weights only "open" AI projects and walk off in disgust.

replies(1): >>44983255 #
10. mhh__ ◴[] No.44983255{3}[source]
Keep in mind that in industries where people code but aren't really programmers, this literally does happen: sometimes very "big" people will be basically scared to share their code because it won't be very good.

But anyway, what I mean is that code is us speaking like a computer; LLMs are the other way around. You can see a lot from how someone interacts with the machine.

replies(1): >>44983298 #
11. nullc ◴[] No.44983298{4}[source]
Not just non-programmers. It's a common problem with jr programmers, and good programmer internships (for example) make a point of forcing the interns to expose themselves ASAP to get over it.

I think if everyone goes into it knowing that it'll be part of what they publish it would be less of an issue.

I mean, unless you're all a bunch of freaks who have instructed your LLM to cosplay as Slave Leia and can't work otherwise, in which case your issues are beyond my pay grade. :P

12. Filligree ◴[] No.44985132{5}[source]
Does that exist for VSCode?

If not, why would it exist for VSCode + a variety of CLI tools + AI? Anyhow, saving the exact prompt isn't super useful; the response is stochastic.

replies(1): >>45002753 #
13. asdff ◴[] No.45002753{6}[source]
VScode has a history capability. You can't save a prompt + a seed to recover the response? That seems like it would be highly useful.
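The prompt + seed idea in miniature: if the model call accepts a seed, saving (prompt, seed) in the log is enough to re-request the same response later. The `fake_llm` below is a deterministic stand-in of my own invention; real APIs (e.g. OpenAI's `seed` parameter) promise only best-effort determinism, which is Filligree's caveat.

```python
import hashlib
import random

def fake_llm(prompt: str, seed: int) -> str:
    """Deterministic stand-in for a model: the seed fixes the 'sampling'."""
    digest = int(hashlib.sha256(prompt.encode()).hexdigest(), 16) % 10**6
    rng = random.Random(seed + digest)
    return f"response-{rng.randint(0, 99999)}"

history = []  # the (prompt, seed) log asdff wants

def ask(prompt: str, seed: int) -> str:
    """Call the model and record enough to replay the call exactly."""
    history.append({"prompt": prompt, "seed": seed})
    return fake_llm(prompt, seed)

first = ask("Fix the off-by-one in pagination", seed=42)
# Replaying from the log reproduces the exact response:
replayed = fake_llm(history[0]["prompt"], history[0]["seed"])
print(first == replayed)  # True
```

With a truly seedable model this would make a prompt log replayable; with best-effort determinism it at least narrows the gap between "here's what I typed" and "here's what I got".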