1311 points by msoad | 2 comments
bsaul ◴[] No.35393916[source]
How is LLaMA's performance relative to ChatGPT? Is it as good as GPT-3, or even GPT-4?
replies(1): >>35394022 #
terafo ◴[] No.35394022[source]
It is as good as GPT-3 at most sizes. An instruct layer needs to be put on top for it to compete with GPT-3.5 (which powers ChatGPT). That can be done with a comparatively small amount of compute: a couple hundred dollars' worth for the small models, and I'd assume low thousands for the 65B model.
replies(1): >>35394938 #
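The "instruct layer" mentioned above is usually plain supervised fine-tuning on instruction/response pairs; Stanford's Alpaca project did exactly this on top of LLaMA for a few hundred dollars of compute. A minimal sketch of the data-formatting side, assuming the Alpaca-style prompt template (the exact wording of the template is an assumption, not something stated in this thread):

```python
# Instruction-tuning renders each (instruction, optional input, response)
# triple into a fixed prompt template; the base model is then fine-tuned to
# continue the prompt with the response. Template wording follows the
# Alpaca convention and is illustrative only.

PROMPT_WITH_INPUT = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. Write a response that appropriately "
    "completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n"
)

PROMPT_NO_INPUT = (
    "Below is an instruction that describes a task. Write a response that "
    "appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)

def format_example(instruction: str, response: str, input: str = "") -> str:
    """Render one supervised fine-tuning example: prompt plus target response."""
    if input:
        prompt = PROMPT_WITH_INPUT.format(instruction=instruction, input=input)
    else:
        prompt = PROMPT_NO_INPUT.format(instruction=instruction)
    return prompt + response

if __name__ == "__main__":
    print(format_example("Translate to French.", "Bonjour", input="Hello"))
```

The fine-tuning step itself (a few epochs of standard causal-language-modeling loss over these rendered strings) is what accounts for the compute cost estimates in the comment above.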
1. stavros ◴[] No.35394938[source]
That's surprising to read, given that ChatGPT (at least the first version) was much worse than text-davinci-003 at following instructions. The new version seems to be much better, though.
replies(1): >>35396955 #
2. weird-eye-issue ◴[] No.35396955[source]
No, it wasn't