Is the 30B model clearly better than the 7B?
I played with Pi3141/alpaca-lora-7B-ggml two days ago and it was super disappointing. On a scale where 0% is alpaca-lora-7B-ggml and 100% is GPT-3.5, where would LLaMA 30B land?
replies(2):