Nonetheless, I'm particularly curious about the cases where the AI tool can find things that aren't easy to find via find & grep (e.g. URLs built up via string concatenation, which never appear as a single string literal in the source code).
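To make that concrete, here's a hypothetical sketch of the kind of code grep misses (all names are made up for the example):

    # The full URL never appears as one literal, so grepping for
    # "https://internal.example.com/v2/users" finds nothing.
    SCHEME = "https"
    HOST = "internal.example" + ".com"   # even the host is split
    PATH = "/".join(["v2", "users"])

    def build_url(user_id: int) -> str:
        # Assembled only at runtime.
        return f"{SCHEME}://{HOST}/{PATH}/{user_id}"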
Perhaps a larger question there: what's the overall false-negative rate of a tool like this? Are there areas where it's particularly good and/or particularly poor?
edits: brevity & clarity
Lsp-mode will schedule one request per keypress and then cancel that request at the next keypress. But since the Python LSP server doesn't do async, it handles cancel requests by ignoring them.
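A minimal sketch of why ignoring is the natural choice in a single-threaded server (an illustration, not the actual server's code):

    import queue

    def handle(msg):
        # Placeholder for real request handling (completion, hover, ...).
        print("handling request", msg.get("id"))

    inbox = queue.Queue()  # stand-in for the JSON-RPC transport

    def serve_forever():
        # Messages are processed strictly one at a time, so a
        # $/cancelRequest can never arrive while its target request is
        # being worked on concurrently.
        while True:
            msg = inbox.get()
            if msg.get("method") == "$/cancelRequest":
                # Nothing to interrupt: the target request either already
                # finished or is still sitting in the queue. Dropping the
                # notification is the simplest option.
                continue
            handle(msg)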
It's not as slick as SQL on an RDBMS, but very close, and it integrates well into e.g. vim, so I can directly pull in output from the tools and add notes when I'm building up my reports. Finding partial URLs, suspicious strings like API keys, SQL query concatenation, and the like is usually trivial.
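As a rough illustration of what I mean (a Python stand-in for the one-liners I'd actually run; the patterns and the src/ root are made up for the example):

    import re
    from pathlib import Path

    # Toy patterns -- a real audit would use a longer, tuned list.
    PATTERNS = {
        "partial URL":      re.compile(r"https?://[\w./-]*"),
        "possible API key": re.compile(r"(?i)api[_-]?key\s*[:=]"),
        "SQL concat":       re.compile(r"(?i)\b(select|insert)\b.*[\"']\s*\+"),
    }

    for path in Path("src").rglob("*.py"):
        text = path.read_text(errors="ignore")
        for lineno, line in enumerate(text.splitlines(), 1):
            for label, pat in PATTERNS.items():
                if pat.search(line):
                    print(f"{path}:{lineno}: {label}: {line.strip()}")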
For me to switch to another toolset, there would have to be very strong guarantees that the output is correct, deterministic, and complete, since it is the core basis for the correctness of my risk assessments and value estimations.
I wish every language just came with a good ctags solution that worked with all IDEs. When this is set up properly, I rarely need more power than a shortcut to look up tags.