
Days since last GitHub incident

(github-incidents.pages.dev)
212 points by AquiGorka | 5 comments
llbbdd ◴[] No.46234360[source]
I've gotten accustomed lately to spending a lot of time in the GitHub Copilot / agent management page. In particular I've been having a lot of fun using agents to browse some of my decade-old throwaway projects; telling it to "setup playwright, write some tests, record screenshots/videos and commit them to the repo" works every time, and it's a great way to browse memory lane without spending my own time getting some of these projects building and running again.
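
For anyone curious what that prompt actually produces: it's mostly two Playwright config knobs plus a tiny spec. A hand-written sketch of the kind of thing the agents end up committing (the URL and selector here are invented, every project's will differ):

    // playwright.config.ts -- turn on video and screenshot capture for every test
    import { defineConfig } from '@playwright/test';

    export default defineConfig({
      use: {
        video: 'on',        // record a video of each test run
        screenshot: 'on',   // save a screenshot when each test finishes
      },
    });

    // example.spec.ts -- minimal smoke test
    import { test, expect } from '@playwright/test';

    test('home page still renders', async ({ page }) => {
      await page.goto('http://localhost:3000'); // assumed local dev server
      await expect(page.locator('h1')).toBeVisible();
    });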

However, this means I'm now using the GitHub website and services 1000x more than I was previously, and they're trending towards coin-flip uptime stats.

If GitHub sold a $5000 box I could plug into a corner of my house and use for that entire experience locally, I'd seriously consider it. I'm guessing I could get partway there by spending twice that on a Mac Pro, but I have no idea what the software stack would look like today.

Is there a fully local, out-of-the-box equivalent experience that anyone can vouch for? I've used local agents primarily through VSCode, but AFAIK that's limited to running a single active agent over your repo, and it's obviously constrained by the single M1 laptop I currently use. I know at least some people are managing local fleets of agents in some manner, but I really like how immensely easy GitHub has made it.

replies(3): >>46234554 #>>46234651 #>>46234652 #
colechristensen ◴[] No.46234652[source]
An NVIDIA DGX Spark is $4000; pair that with a relatively cheap second box running GitLab in the corner and you'd have a pretty good local AI inference setup (though you'd probably have to write a nontrivial amount of software to get your setup where you want it).

The local models are right on the edge of being really useful: there's a tipping point where accuracy is high enough that getting things done is easy, rather than the model getting continuously stuck. We're in the neighborhood.

Alternatively, just run GitLab locally and use one of the many hosted model APIs; those are much more stable than GitHub. Honestly, just get yourself a Claude subscription.
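
To give a flavor of the glue code: if the box serves a model behind an OpenAI-compatible endpoint (llama.cpp's server and vLLM both expose one), most of the plumbing reduces to plain HTTP calls like this sketch -- the host, port, and model name here are placeholders, not real defaults:

    // Minimal sketch of calling a local OpenAI-compatible chat endpoint.
    const res = await fetch('http://spark.local:8000/v1/chat/completions', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        model: 'local-model', // whatever model the local server is serving
        messages: [{ role: 'user', content: 'Summarize the failing CI job.' }],
      }),
    });
    const data = await res.json();
    console.log(data.choices[0].message.content);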

replies(2): >>46235613 #>>46235761 #
smcleod ◴[] No.46235761[source]
The DGX Spark is not good for inference, though: it's very memory-bandwidth limited, around the same as a lower-end MacBook Pro. You're much better off with Apple Silicon for performance and memory size at the moment, but I'd recommend holding off until the M5 Max comes out early in the new year, as the M5 has vastly superior performance to any other Apple Silicon chip thanks to its matmul instruction set.
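
Back-of-envelope for why bandwidth is the ceiling: generating each token has to stream roughly the whole (quantized) model through memory, so tokens/sec tops out near bandwidth divided by model size. The figures below are rough published spec numbers, not benchmarks:

    // Upper bound on decode speed for a dense model:
    // tokens/sec ≈ memory bandwidth / bytes streamed per token (≈ model size).
    function maxTokensPerSec(bandwidthGBps: number, modelSizeGB: number): number {
      return bandwidthGBps / modelSizeGB;
    }

    // Approximate spec-sheet numbers (assumptions, not measurements):
    console.log(maxTokensPerSec(273, 40).toFixed(1)); // DGX Spark ~273 GB/s, 70B @ 4-bit -> ~6.8 tok/s
    console.log(maxTokensPerSec(546, 40).toFixed(1)); // M4 Max ~546 GB/s                 -> ~13.7 tok/s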
replies(1): >>46235996 #
1. llbbdd ◴[] No.46235996[source]
Oof, I was already considering an upgrade from the M1 but was hoping I wouldn't be convinced to go for the top of the line. Is the performance jump from the M# -> M# Max chips that substantial?
replies(2): >>46240957 #>>46241903 #
2. baby_souffle ◴[] No.46240957[source]
> Is the performance jump from the M# -> M# Max chips that substantial

From an M1? Yes, absolutely. From an M3 it's marginal for now, but the M5 will probably make it definite.

3. smcleod ◴[] No.46241903[source]
The main jump is from anything to the M5: not because it's simply the latest, but because it has matmul instructions similar to a CUDA GPU's, which fix the slow prompt processing on all previous-generation Apple Silicon chips (prompt processing is compute-bound, unlike token generation, which is bandwidth-bound).
replies(1): >>46270430 #
4. llbbdd ◴[] No.46270430[source]
I'm crying, man. I really don't want to set up a new laptop but you're making it hard.
replies(1): >>46271701 #
5. smcleod ◴[] No.46271701{3}[source]
The M5 Max should be out around February - wait till then.