262 points | lawrencechen | 1 comment

0github.com is a pull-request viewer that color-codes every diff line/token by how much human attention it probably needs. Unlike PR-review bots, we try to flag lines not just by "is it a bug?" but by "is it worth a second look?" (examples: a hard-coded secret, a weird crypto mode, gnarly logic, ugly code).

To try it, replace github.com with 0github.com in any pull-request URL. Under the hood, we split the PR into individual files, and for each file, we ask an LLM to annotate each line with a data structure that we parse into a colored heatmap.
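
To make that concrete, here's a simplified sketch of the per-file pass (the prompt wording, JSON field names, and model are illustrative stand-ins; the real prompt and schema live in the repo linked below):

    # Simplified sketch of the per-file annotation pass; prompt, schema,
    # and model choice are illustrative, not the actual implementation.
    # Assumes the openai package and an OPENAI_API_KEY in the environment.
    import json
    from openai import OpenAI

    client = OpenAI()

    PROMPT = (
        "For each changed line in the unified diff below, return JSON of the form\n"
        '{"lines": [{"line": <number>, "score": <0.0-1.0 review-worthiness>,\n'
        '"reason": "<one sentence>"}]}.\n'
        "Score higher for hard-coded secrets, weird crypto modes, and gnarly logic.\n\n"
    )

    def annotate_file(diff: str) -> list[dict]:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative; any JSON-mode-capable model works
            messages=[{"role": "user", "content": PROMPT + diff}],
            response_format={"type": "json_object"},
        )
        return json.loads(resp.choices[0].message.content)["lines"]

Each file's annotations are then merged back into the diff view as the heatmap.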

Examples:

https://0github.com/manaflow-ai/cmux/pull/666

https://0github.com/stack-auth/stack-auth/pull/988

https://0github.com/tinygrad/tinygrad/pull/12995

https://0github.com/simonw/datasette/pull/2548

Notice that all the example links have a 0 prepended to github.com. That takes you to our custom diff viewer, which handles the same URL path parameters as github.com. Darker yellows indicate areas that might require more investigation. Hover over a highlight to see the LLM's explanation. There's also a slider in the top left to adjust the "should review" threshold.
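
The slider works roughly like this: scores below the threshold get no highlight, and everything above it fades toward the darkest yellow (a simplified sketch of the mapping, not the actual rendering code):

    # Simplified sketch of how a per-line score and the slider threshold
    # could map to highlight intensity; not the actual rendering code.
    def highlight_alpha(score: float, threshold: float) -> float:
        """Map a 0-1 review-worthiness score to a yellow opacity,
        given the user's "should review" threshold."""
        if score < threshold:
            return 0.0  # below the slider: no highlight at all
        if threshold >= 1.0:
            return 1.0
        # Rescale the surviving range so the darkest yellow stays reachable.
        return (score - threshold) / (1.0 - threshold)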

Repo (MIT license): https://github.com/manaflow-ai/cmux

mmastrac ◴[] No.45763726[source]
I tried it on a low-complexity Rust PR I worked on a few months back and it did a pretty good job. I'd probably change where the highlights live (for example x.y.z() -> x.w.z() should highlight y/w in a lot of cases).

For the most part, it seems to draw the eye to the general area where you need to look closer. It found a near-invisible typo in a coworker's PR which was kind of interesting as well.

https://0github.com/geldata/gel-rust/pull/530

It seems to flag _some_ deletions as needing attention, but I feel like a lot of them are ignored.

Is this using some sort of measure of distance between the expected token in this position vs the actual token?

EDIT: Oh, I guess it's just an LLM prompt? I would be interested to see an approach where the expected token vs actual token generates a heatmap.

replies(1): >>45763884 #
1. lawrencechen ◴[] No.45763884[source]
Happy to hear!

> Is this using some sort of measure of distance between the expected token in this position vs the actual token?

The main implementation is in this file: https://github.com/manaflow-ai/cmux/blob/main/apps/www/lib/s...

EDIT: yeah it's just an LLM prompt haha

Just a simple prompt right now, but I think we could try an approach where we directly see which tokens might be hallucinated. Gonna try to find the paper for this idea. Might be kinda analogous to the "distance between the expected token in this position vs the actual token."
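
Something like this, maybe (rough sketch with gpt2 via Hugging Face transformers; purely illustrative, not anything 0github does today): score each token of the changed code by how surprised a causal LM is to see it given the prefix, then shade the most surprising tokens darkest.

    # Rough sketch of the token-surprisal idea with gpt2; illustrative only.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    def token_surprisals(code: str) -> list[tuple[str, float]]:
        ids = tok(code, return_tensors="pt").input_ids
        with torch.no_grad():
            logits = model(ids).logits
        # Logits at position i predict token i+1, so shift by one.
        logprobs = torch.log_softmax(logits[:, :-1], dim=-1)
        actual = ids[:, 1:]
        nll = -logprobs.gather(2, actual.unsqueeze(-1)).squeeze(-1)[0]
        return [(tok.decode(t), s.item()) for t, s in zip(actual[0], nll)]

    # Tokens where the model "expected something else" get the darkest shade.
    for token, surprisal in token_surprisals("cipher = AES.new(key, AES.MODE_ECB)"):
        print(f"{surprisal:6.2f}  {token!r}")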