    283 points by summarity | 26 comments
    1. ryandrake ◴[] No.44369008[source]
    Receiving hundreds of AI generated bug reports would be so demoralizing and probably turn me off from maintaining an open source project forever. I think developers are going to eventually need tools to filter out slop. If you didn’t take the time to write it, why should I take the time to read it?
    replies(7): >>44369097 #>>44369153 #>>44369155 #>>44369386 #>>44369772 #>>44369954 #>>44370907 #
    2. triknomeister ◴[] No.44369097[source]
    Eventually, projects that can afford the smugness are going to charge people to be able to talk to open source developers.
    replies(1): >>44369175 #
    3. jgalt212 ◴[] No.44369153[source]
    One would think that if AI can generate the slop, it could also triage the slop.
    replies(1): >>44369194 #
    4. teeray ◴[] No.44369155[source]
    You see, the dream is another AI that reads the report and writes the issue in the bug tracker. Then another AI implements the fix. A third AI then reviews the code and approves and merges it. All without human interaction! Once CI releases the fix, the first AI can then find the same vulnerability plus a few new and exciting ones.
    replies(1): >>44369220 #
    5. tough ◴[] No.44369175[source]
    Isn't that called enterprise support / consulting?
    replies(2): >>44369456 #>>44370077 #
    6. err4nt ◴[] No.44369194[source]
    How does it know the difference?
    replies(3): >>44369356 #>>44371873 #>>44380338 #
    7. dingnuts ◴[] No.44369220[source]
    This is completely absurd. If generating code is reliable, you can have one generator make the change, and then merge and release it with traditional software.

    If it's not reliable, how can you trust the written issue, or the review, to be correct? And then how does that benefit you over just blindly merging whatever changes the model produces?

    replies(2): >>44369277 #>>44369684 #
    8. tempodox ◴[] No.44369277{3}[source]
    Making sense is not required as long as “AI” vendors sell subscriptions.
    9. scubbo ◴[] No.44369356{3}[source]
    I'm still on the AI-skeptic side of the spectrum (though shifting more towards "it has some useful applications"), but I think the easy answer is to use different models/prompts for the quality- and correctness-checking than were used for generation.
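    A minimal sketch of that split, assuming the checker model is wrapped in a caller-supplied function (the names `triage_report` and `ask_checker` and the prompt wording are illustrative, not any particular vendor's API):

        from typing import Callable

        def triage_report(report: str, ask_checker: Callable[[str], str]) -> bool:
            """Return True if the checker model deems the report plausible.

            `ask_checker` should wrap a *different* model/prompt than whatever
            generated the report; plug in your own LLM client here.
            """
            prompt = (
                "You are reviewing a security bug report for plausibility. "
                "Reply with exactly ACCEPT or REJECT.\n\n" + report
            )
            return ask_checker(prompt).strip().upper().startswith("ACCEPT")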
    10. Nicook ◴[] No.44369386[source]
    Open source maintainers have been complaining about this for a while: https://sethmlarson.dev/slop-security-reports. I'm assuming the proliferation of AI will have significant effects on open source projects, and already has.
    replies(1): >>44375089 #
    11. ◴[] No.44369456{3}[source]
    12. croes ◴[] No.44369684{3}[source]
    That’s why parent wrote it’s a dream.

    It’s not real.

    But you can bet someone will sell that as the solution.

    13. tptacek ◴[] No.44369772[source]
    These aren't like GitHub Issues reports; they're bug bounty programs, specifically stood up to soak up incoming reports from anonymous strangers looking to make money on their submissions. The premise is that enough of those reports will drive specific security goals to make it worthwhile (for smart vendors, the scope of each program is tailored to internal engineering goals).
    replies(1): >>44370208 #
    14. moyix ◴[] No.44369954[source]
    All of these reports came with executable proof of the vulnerabilities – otherwise, as you say, you get flooded with hallucinated junk like the poor curl dev. This is one of the things that makes offensive security an actually good use case for AI – exploits serve as hard evidence that the LLM can't fake.
    replies(1): >>44376326 #
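    A rough sketch of the kind of check moyix describes, assuming each submission ships a proof-of-concept packaged as a command that crashes (or trips a sanitizer) when it reproduces the bug; sandboxing the run in a container or VM is assumed and out of scope here:

        import subprocess

        def verify_poc(poc_cmd: list[str], timeout_s: int = 60) -> bool:
            """Run a submitted proof-of-concept and look for hard evidence."""
            try:
                result = subprocess.run(
                    poc_cmd, capture_output=True, text=True, timeout=timeout_s
                )
            except subprocess.TimeoutExpired:
                return False  # no reproduction within the time budget
            # On POSIX, a fatal signal (e.g. SIGSEGV) shows up as a negative
            # return code; AddressSanitizer writes its report to stderr.
            crashed = result.returncode < 0
            sanitizer_hit = "AddressSanitizer" in result.stderr
            return crashed or sanitizer_hit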
    15. triknomeister ◴[] No.44370077{3}[source]
    This is without the enterprise.
    replies(1): >>44370390 #
    16. ryandrake ◴[] No.44370208[source]
    Got it! The financial incentive will probably turn out to be a double-edged sword. In the pre-AI age it may have worked as designed to drive those goals, but I bet the ability to automate submissions will inevitably alter the rules of these programs.

    I think within the next 5 years or so, we are going to see a societal pattern repeat: any program that rewards human ingenuity and input will be industrialized by AI, to the point where an industry of companies floods every program with 99% AI submissions. What used to be lone wolves or small groups of humans working on bounties will become truckloads of AI-generated "stuff" trying to maximize revenue.

    replies(2): >>44371154 #>>44371611 #
    17. tough ◴[] No.44370390{4}[source]
    Gotcha, maybe I could see GitHub donations enabling issue creation or whatever in the future, I don't know.

    But FOSS is FOSS, and I guess source-available doesn't mean we have to read your messages; see SQLite (they won't even take PRs, lol).

    18. bawolff ◴[] No.44370907[source]
    If you think the AI slop is demoralizing, you should see the human submissions bug bounties get.

    There is a reason companies like HackerOne exist: dealing with the submissions is terrible.

    19. dcminter ◴[] No.44371154{3}[source]
    I'm wary of a lot of AI stuff, but here:

    > What used to be lone wolves or small groups of humans working on bounties will become truckloads of AI generated “stuff” trying to maximize revenue.

    You're objecting to the wrong thing. The purpose of a bug bounty programme is not to provide a cottage industry for security artisans - it's to flush out security vulnerabilities.

    There are reasonable objections to AI automation in this space, but this is not one of them.

    20. t0mas88 ◴[] No.44371611{3}[source]
    Might be fixable by adding a $100 submission fee that is returned when you provide working exploit code. It would make the curl team a lot of money.
    replies(1): >>44378062 #
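    The mechanics t0mas88 suggests are simple enough to sketch; the $100 figure and the names below are illustrative:

        from dataclasses import dataclass

        SUBMISSION_FEE = 100  # dollars, refundable on a proven exploit

        @dataclass
        class Submission:
            reporter: str
            deposit: int = SUBMISSION_FEE

        def settle(sub: Submission, exploit_verified: bool) -> int:
            """Refund the deposit for a proven exploit; forfeit it otherwise.

            Forfeited deposits go to the project, which is what would make
            the curl team a lot of money.
            """
            return sub.deposit if exploit_verified else 0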
    21. jgalt212 ◴[] No.44371873{3}[source]
    I think Claude, given enough time to mull it over, could probably come up with some sort of bug severity score.
    22. nestorD ◴[] No.44375089[source]
    Yes! I recently had to manually answer and close a GitHub issue telling me I might have pushed an API key to GitHub. No, "API_KEY=put-your-key-here;" is a placeholder, and I should not have to waste time writing that.
    23. eeeeeeehio ◴[] No.44376326[source]
    Is "proof of vulnerability" a marketing term, or do you actually claim that XBOW has a 0% false positive rate? (i.e. "all" reports come with a PoV, and this PoV "proves" there is a vulnerability?)
    24. billy99k ◴[] No.44378062{4}[source]
    I've been on HackerOne for almost 8 years, and I think the problem with this is that too many companies won't pay for legitimate bugs, even when you have a working exploit.

    I had one critical bug take 3 years to get a payout. I had a full walkthrough with videos and a report. The company kept stalling, and at one point told me that because they had the app completely remade, they weren't going to pay me anything.

    HackerOne doesn't really protect the researcher either. I was told multiple times that there was 'nothing they could do'.

    I eventually got paid, but this is pretty normal behavior in bug bounty. Too many companies use it for free security work.

    replies(1): >>44379665 #
    25. tptacek ◴[] No.44379665{5}[source]
    I do think HackerOne is problematic, in that it pushes companies that don't really understand bug bounties to stand up bounty programs without a clear reason. If you're doing a serious bounty, your incentive is to pay out. But a lot of companies do these bounties because they just think they're supposed to.

    Most companies should not do bug bounties.

    26. beng-nl ◴[] No.44380338{3}[source]
    This might not always be possible, but wherever it is, a working exploit could be demanded, in a form that can be verified automatically.