
283 points by summarity | 1 comment | source
tecleandor ◴[] No.44369385[source]
First:

> To bridge that gap, we started dogfooding XBOW in public and private bug bounty programs hosted on HackerOne. We treated it like any external researcher would: no shortcuts, no internal knowledge—just XBOW, running on its own.

Is it dogfooding if you're not doing it to yourself? I'd consider it dogfooding only if they were flooding themselves with AI-generated bug reports, not other people. They're not the ones reviewing them.

Also, honest question: what does "best" mean here? The one that has sent the most reports?

replies(2): >>44369528 #>>44372234 #
jamessinghal ◴[] No.44369528[source]
Their success rates on HackerOne seem to vary widely.

  22/24 (Valid / Closed) for Walt Disney

  3/43 (Valid / Closed) for AT&T
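
Assuming "Valid / Closed" means valid reports out of all closed reports, here is a minimal Python sketch that turns the two figures quoted above into rough acceptance rates. The counts are just the ones in this comment; this is illustrative only, not official HackerOne data.

  # Compute rough acceptance rates from the Valid/Closed counts quoted above.
  # Counts are taken from this thread; nothing here is official data.
  counts = {
      "Walt Disney": (22, 24),  # (valid reports, closed reports)
      "AT&T": (3, 43),
  }

  for program, (valid, closed) in counts.items():
      print(f"{program}: {valid}/{closed} valid = {valid / closed:.0%}")

  # Prints roughly:
  #   Walt Disney: 22/24 valid = 92%
  #   AT&T: 3/43 valid = 7%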
replies(2): >>44369569 #>>44370666 #
thaumasiotes ◴[] No.44369569[source]
> Their success rates on HackerOne seem to vary widely.

Some of that is likely down to company policies; Snapchat's policy, for example, is that nothing is ever marked invalid.

replies(1): >>44369633 #
jamessinghal ◴[] No.44369633[source]
Yes, I'm sure someone with more HackerOne experience can give specifics on each company's policies. For now, those counts are the most objective measure of report quality we have.
replies(1): >>44369795 #
1. moyix ◴[] No.44369795[source]
This is discussed in the post – many came down to individual programs' policies e.g. not accepting the vulnerability if it was in a 3rd party product they used (but still hosted by them), duplicates (another researcher reported the same vuln at the same time; not really any way to avoid this), or not accepting some classes of vuln like cache poisoning.