
577 points simonw | 2 comments | source
stpedgwdgfhgdd ◴[] No.44723879[source]
Aside from the fact that Space Invaders from scratch is not representative of real engineering, it will be interesting to see what Anthropic's business model will be if I can run a solid code-generation model on my local machine (no usage tier per hour or week), say, one year from now. At $200 per month for 2 years, I could instead buy a decent Mx with 64GB (or perhaps even 128GB, taking residual value into account).
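
A back-of-the-envelope sketch of that math in Python; the plan price, Mac price, and residual value are all illustrative assumptions, not quoted figures:

    # Subscription vs. local hardware over two years; every figure here
    # is an assumption for illustration.
    subscription_per_month = 200      # USD, the plan price from the comment
    months = 24                       # two-year horizon
    subscription_total = subscription_per_month * months  # 4800 USD

    mac_price = 4800                  # assumed Apple Silicon machine, 64 GB
    residual_value = 0.4 * mac_price  # assumed resale value after two years
    effective_hw_cost = mac_price - residual_value

    print(f"Subscription over {months} months: ${subscription_total}")
    print(f"Hardware cost net of resale:       ${effective_hw_cost:.0f}")

Under these assumptions the hardware comes out well ahead (~$2,880 vs. $4,800), though the result is sensitive to the assumed resale value.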
replies(5): >>44724300 #>>44724450 #>>44724558 #>>44724731 #>>44724993 #
rafaelmn ◴[] No.44724450[source]
What about power usage and supporting hardware? Also, if the card goes down, you are down until you get warranty service.
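
To put a rough number on the power objection, a small sketch with assumed wattage, usage pattern, and electricity price:

    # Electricity cost over the same two years; all figures are assumptions.
    avg_watts = 100                   # assumed average draw under mixed inference load
    hours_per_day = 8                 # assumed daily usage
    price_per_kwh = 0.30              # assumed electricity price, USD/kWh
    days = 365 * 2

    energy_kwh = avg_watts / 1000 * hours_per_day * days   # ~584 kWh
    print(f"Electricity over 2 years: ${energy_kwh * price_per_kwh:.0f}")  # ~$175

Under these assumptions power is a small line item next to the hardware itself; the downtime/warranty risk is harder to price.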
replies(1): >>44725070 #
1. skeezyboy ◴[] No.44725070[source]
Why are you doing anything locally, then?
replies(1): >>44731841 #
2. rafaelmn ◴[] No.44731841[source]
Latency and tooling support? For general LLM use, the UX of the cloud option is much better than local; for dev tooling, not so much.

I tried using remote workstations; I am not a fan of lugging a beefy client machine around to do my work and would much rather use something that's super light and power efficient.