
1311 points by msoad | 1 comment
qwertox No.35393996
Is the 30B model clearly better than the 7B?

I played with Pi3141/alpaca-lora-7B-ggml two days ago and it was super disappointing. On a scale where 0% is alpaca-lora-7B-ggml and 100% is GPT-3.5, where would LLaMA 30B fall?
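
(For context, a ggml-format model like that is typically run with llama.cpp or a binding such as llama-cpp-python; the commenter doesn't say which they used. Below is a minimal sketch of one way to prompt such a model, assuming the llama-cpp-python package is installed and the weights file has been downloaded locally — the file name and prompt are hypothetical.)

    # Minimal sketch: load a local ggml weights file and run one prompt.
    # Assumes `pip install llama-cpp-python`; the model path is hypothetical.
    from llama_cpp import Llama

    llm = Llama(model_path="./ggml-alpaca-7b-q4.bin", n_ctx=512, n_threads=8)
    out = llm(
        "Q: Briefly explain what a transformer model is. A:",
        max_tokens=128,
        temperature=0.7,
        stop=["Q:"],
    )
    print(out["choices"][0]["text"])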

replies(2): >>35394629 >>35395773
1. Rzor No.35394629
I haven't been able to run it myself yet, but from what I've read so far from people who have, the 30B model is where the "magic" starts to happen.