This is cool. I think faster models can unlock entirely new usage paradigms, like how faster search enables incremental search.
replies(1):
Comments are pretty short, but there are many millions of them. So getting high throughput at minimum cost is key.
I'm hoping that Inception might be able to churn through this quickly.
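For concreteness, the throughput side is mostly fan-out. Here's a rough sketch of how I'd push the comments through an OpenAI-compatible endpoint concurrently; the base URL, model name, and classification prompt are placeholders, not any real config:

```python
# Sketch: fan short comments out to an OpenAI-compatible endpoint concurrently.
# Base URL, model name, and prompt are placeholders.
import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI(
    base_url="https://api.example.com/v1",  # placeholder; point at whatever serves the fast model
    api_key="YOUR_KEY",
)

async def label_comment(sem: asyncio.Semaphore, comment: str) -> str:
    async with sem:  # cap in-flight requests so throughput stays high without tripping rate limits
        resp = await client.chat.completions.create(
            model="fast-model",  # placeholder model name
            messages=[
                {"role": "system", "content": "Classify the comment's topic in one word."},
                {"role": "user", "content": comment},
            ],
            max_tokens=8,
        )
        return resp.choices[0].message.content.strip()

async def label_all(comments: list[str], concurrency: int = 64) -> list[str]:
    sem = asyncio.Semaphore(concurrency)
    return await asyncio.gather(*(label_comment(sem, c) for c in comments))

if __name__ == "__main__":
    print(asyncio.run(label_all(["great post", "this is spam"])))
```

With short comments the per-request cost is tiny, so the concurrency cap is really the only knob that matters.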
If you folks have other ideas or suggestions for what might also work, I'd love to hear them!
The idea is to have a semgrep-style command-line tool. If latencies are dropping dramatically, it might be feasible.
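If I'm reading the idea right, it's roughly grep where the pattern is a natural-language query and the model is the matcher. A minimal sketch of that shape (again, endpoint and model name are placeholders):

```python
#!/usr/bin/env python3
# Sketch of a grep-like CLI: the "pattern" is a natural-language query the model
# answers yes/no for each input line. Endpoint and model name are placeholders.
import argparse
import sys
from openai import OpenAI

client = OpenAI(base_url="https://api.example.com/v1", api_key="YOUR_KEY")  # placeholder

def matches(query: str, line: str) -> bool:
    """Ask the model whether a line matches the query; expects a yes/no answer."""
    resp = client.chat.completions.create(
        model="fast-model",  # placeholder
        messages=[
            {"role": "system", "content": "Answer strictly 'yes' or 'no'."},
            {"role": "user", "content": f"Query: {query}\nLine: {line}\nDoes the line match the query?"},
        ],
        max_tokens=1,
    )
    return resp.choices[0].message.content.strip().lower().startswith("y")

def main() -> None:
    parser = argparse.ArgumentParser(description="grep, but the pattern is a natural-language query")
    parser.add_argument("query", help="natural-language description of lines to keep")
    args = parser.parse_args()
    for line in sys.stdin:
        if matches(args.query, line.rstrip("\n")):
            sys.stdout.write(line)

if __name__ == "__main__":
    main()
```

Usage would be something like `cat comments.txt | ./llmgrep.py "complaints about pricing"`. Whether this is pleasant to use interactively comes down entirely to per-line latency, which is why the faster models make it interesting.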