
343 points sillysaurusx | 2 comments
linearalgebra45 ◴[] No.35028638[source]
It's been long enough since this leaked, so my question is: why aren't there already blog posts of people blowing their $300 of starter credit with ${cloud_provider} on a few hours' experimentation running inference on this 65B model?

Edit: I read the linked README.

> I was impatient and curious to try to run 65B on an 8xA100 cluster

Well?

replies(2): >>35028936 #>>35030027 #
v64 ◴[] No.35028936[source]
The compute necessary to run 65B naively was only available on AWS (and perhaps Azure; I don't work with them), and the required instance types have been unavailable to the public recently (it seems everyone had the same idea to hop on this and try to run it). As noted in my other post here [1], the memory requirements have since been lowered through other work, and it should now be possible to run the 65B on a provider like CoreWeave.

[1] https://news.ycombinator.com/item?id=35028738
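
For a sense of why the quantization work matters, here's the back-of-envelope memory math (pure arithmetic, no library assumptions):

    # Rough VRAM needed just to hold the weights of a 65B-parameter model,
    # ignoring activations and the KV cache (which add several GB more).
    params = 65e9

    for name, bytes_per_param in [("fp16", 2), ("int8", 1), ("int4", 0.5)]:
        gib = params * bytes_per_param / 1024**3
        print(f"{name}: ~{gib:.0f} GiB")

    # fp16: ~121 GiB -> needs multiple GPUs (e.g. 2x A100 80GB)
    # int8: ~61 GiB  -> fits on a single A100 80GB
    # int4: ~30 GiB  -> fits on a 40GB card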

replies(2): >>35029106 #>>35029766 #
MacsHeadroom ◴[] No.35029766[source]
I'm running LLaMA-65B on a single A100 80GB with 8-bit quantization, at $1.50/hr on vast.ai.
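
For anyone wanting to reproduce this, a minimal sketch of 8-bit loading with transformers + bitsandbytes (the model path is a placeholder, and this assumes the weights have already been converted to the Hugging Face format):

    # Sketch: load a 65B model with int8 weights via bitsandbytes.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_path = "/path/to/llama-65b-hf"  # placeholder; convert weights first

    tokenizer = AutoTokenizer.from_pretrained(model_path)
    model = AutoModelForCausalLM.from_pretrained(
        model_path,
        load_in_8bit=True,   # int8 weights: ~61 GiB for 65B params
        device_map="auto",   # place layers on the available GPU(s)
    )

    inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=32)
    print(tokenizer.decode(out[0], skip_special_tokens=True))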
replies(7): >>35030000 #>>35030059 #>>35031427 #>>35136771 #>>35145917 #>>35189078 #>>35189095 #
youssefabdelm ◴[] No.35031427[source]
What's the speed like? How many tokens per second? Is it as fast as, say, ChatGPT?
replies(1): >>35107419 #
MacsHeadroom ◴[] No.35107419[source]
It's about as fast as ChatGPT was when it first launched. Not as fast as the new "Turbo" version of ChatGPT, but much faster than you or anyone can read (so I'm not sure the difference matters).
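
If you want a number rather than my impression, here's a quick-and-dirty way to measure throughput (illustrative; reuses the hypothetical model/tokenizer from the loading sketch above):

    # Time a single generation and report tokens per second.
    import time

    prompt = "Explain quantization in one paragraph."
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

    start = time.time()
    out = model.generate(**inputs, max_new_tokens=128)
    elapsed = time.time() - start

    new_tokens = out.shape[1] - inputs["input_ids"].shape[1]
    print(f"{new_tokens} tokens in {elapsed:.1f}s -> {new_tokens / elapsed:.1f} tok/s")

    # For scale: fast reading is roughly 5 words/sec (~7 tokens/sec),
    # so anything above ~10 tok/s outpaces a human reader.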
replies(1): >>35119867 #
youssefabdelm ◴[] No.35119867[source]
That's awesome! Thanks!