
283 points by summarity | 5 comments
ryandrake ◴[] No.44369008[source]
Receiving hundreds of AI generated bug reports would be so demoralizing and probably turn me off from maintaining an open source project forever. I think developers are going to eventually need tools to filter out slop. If you didn’t take the time to write it, why should I take the time to read it?
replies(7): >>44369097 #>>44369153 #>>44369155 #>>44369386 #>>44369772 #>>44369954 #>>44370907 #
1. jgalt212 ◴[] No.44369153[source]
One would think if AI can generate the slop it could also triage the slop.
replies(1): >>44369194 #
2. err4nt ◴[] No.44369194[source]
How does it know the difference?
replies(3): >>44369356 #>>44371873 #>>44380338 #
3. scubbo ◴[] No.44369356[source]
I'm still on the AI-skeptic side of the spectrum (though shifting more towards "it has some useful applications"), but I think the easy answer is: use different models/prompts for quality- and correctness-checking than were used for generation.
4. jgalt212 ◴[] No.44371873[source]
I think Claude, given enough time to mull it over, could probably come up with some sort of bug severity score.
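A toy version of such a severity score, weighting a few signals a triage model could extract from a report. The signals and weights here are invented for illustration, not any real rubric (though CVSS-style scoring is the obvious real-world analogue).

```python
def severity_score(has_poc: bool, remote: bool, memory_unsafe: bool,
                   affects_default_config: bool) -> float:
    """Return a 0-10 severity score from weighted report attributes.
    Weights are illustrative assumptions, not a standard."""
    score = 0.0
    score += 4.0 if has_poc else 0.0               # reproducible beats speculative
    score += 3.0 if remote else 1.0                # remotely triggerable > local-only
    score += 2.0 if memory_unsafe else 0.0         # e.g. OOB write vs. logic bug
    score += 1.0 if affects_default_config else 0.0
    return min(score, 10.0)
```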
5. beng-nl ◴[] No.44380338[source]
This might not always work, but whenever possible a working exploit could be required, submitted in a form that can be verified automatically.
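A minimal sketch of that verification step: require the proof-of-concept to be a runnable script, and accept the report only if executing it demonstrably fails. The crash criterion here (non-zero exit within a timeout) and the Python-script packaging are assumptions; a real pipeline would sandbox the run.

```python
import subprocess
import sys
import tempfile

def poc_verifies(poc_source: str, timeout: int = 10) -> bool:
    """Run the submitted PoC in a subprocess; 'verified' here means it
    exits non-zero (a crash or failed assertion) before the timeout."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(poc_source)
        path = f.name
    try:
        result = subprocess.run([sys.executable, path],
                                capture_output=True, timeout=timeout)
        return result.returncode != 0   # demonstrated failure = verified
    except subprocess.TimeoutExpired:
        return False                    # a hung PoC proves nothing

crashing = "raise SystemExit(1)  # stands in for triggering the real bug"
benign = "print('nothing happened')"
```

This flips the burden of proof: the submitter's automation has to produce something executable, so a report that is pure prose is rejected without a human ever reading it.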