
216 points by veggieroll | 1 comment
xnx No.41860534
Has anyone put together a good and regularly updated decision tree for what model to use in different circumstances (VRAM limitations, relative strengths, licensing, etc.)? Given the enormous zoo of models in circulation, there must be certain models that are totally obsolete.
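Even a toy sketch shows the shape such a tree might take; the VRAM thresholds and model categories below are made-up assumptions for illustration, not recommendations:

    def pick_model(vram_gb: float, permissive_license_only: bool = False) -> str:
        # Toy decision tree; thresholds and suggestions are hypothetical examples.
        if vram_gb < 8:
            return "small model, heavily quantized (e.g. a 7B at 4-bit)"
        if vram_gb < 24:
            return "mid-size model (e.g. a 13B, or a larger model quantized)"
        if permissive_license_only:
            return "largest Apache/MIT-licensed model that fits in VRAM"
        return "largest open-weights model that fits in VRAM"

    print(pick_model(12))  # -> "mid-size model (...)"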
replies(3): >>41860656 >>41860757 >>41861253
leetharris No.41860656
People keep making these, but they go out of date fast and nobody keeps them maintained. If your definition of "great" changes every six months because a new model shatters your expectations, it's hard to go back and rescore legacy models.

I'd say keeping up with the Reddit LocalLLaMA community is the "easiest" way, and even that is by no means easy.

replies(2): >>41861614 >>41868121
kergonath No.41868121
> I'd say keeping up with the Reddit LocalLLaMA community is the "easiest" way, and even that is by no means easy.

The subreddit is… not great. It’s a decent way of keeping up with new releases, but don’t put much weight on the posts themselves: there is a heavy social aspect, and the models discussed there are a very specific subset of what’s available. There is a lot of groupthink, and the discussions are rarely rigorous. Most posts are along the lines of “I ran that one benchmark I made up and it came out 0.5 points ahead of Llama-whatever, therefore it’s the dog’s and everything else is shite”. The Zuckerberg worshiping is also disconcerting. Returns diminish quickly the more time you spend on that subreddit.