
548 points by tifa2up | 5 comments
383toast No.45646734
They should've tested other embedding models; there are better (and cheaper) ones than OpenAI's.
replies(1): >>45646823
1. prettyblocks No.45646823
Which do you suggest?
replies(2): >>45646987, >>45647899
2. roze_sha No.45646987
https://huggingface.co/spaces/mteb/leaderboard
replies(2): >>45647106, >>45649118
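
For anyone who would rather benchmark their own stack than read the public leaderboard, here is a minimal sketch of running a single MTEB task locally, assuming the `mteb` package and sentence-transformers are installed; the model and task names are only examples, not recommendations:

    import mteb
    from sentence_transformers import SentenceTransformer

    # Load any sentence-transformers-compatible embedding model
    # (this checkpoint is just a small example).
    model = SentenceTransformer("intfloat/e5-small-v2")

    # Pick one MTEB task and run the evaluation; results are
    # written as JSON under the output folder.
    tasks = mteb.get_tasks(tasks=["Banking77Classification"])
    evaluation = mteb.MTEB(tasks=tasks)
    results = evaluation.run(model, output_folder="results")
    print(results)

Running a handful of tasks that resemble your actual retrieval workload is usually more informative than the aggregate leaderboard score.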
3. 383toast No.45647106
yep
4. leftnode No.45647899
The Qwen3 embedding models (0.6B and 4B) are near state of the art and aren't too computationally intensive.
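
For the curious, a minimal sketch of using the smaller one, assuming a recent sentence-transformers release (the `similarity` helper needs v3.1+) and the Qwen/Qwen3-Embedding-0.6B checkpoint on Hugging Face; the query prompt follows the model card, and the texts are placeholders:

    from sentence_transformers import SentenceTransformer

    # Qwen3-Embedding-0.6B runs fine on CPU for small batches.
    model = SentenceTransformer("Qwen/Qwen3-Embedding-0.6B")

    queries = ["What is retrieval-augmented generation?"]
    docs = ["RAG combines a retriever with a generator model."]

    # The model card uses a dedicated "query" prompt on the query side;
    # documents are encoded without a prompt.
    query_emb = model.encode(queries, prompt_name="query")
    doc_emb = model.encode(docs)

    # Cosine similarity matrix between queries and documents.
    print(model.similarity(query_emb, doc_emb))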
5. remz14 No.45649118
You should use RTEB instead. See here for why: https://huggingface.co/blog/rteb

Here is that leaderboard: https://huggingface.co/spaces/mteb/leaderboard?benchmark_nam...

Voyage-3-large seems like SOTA right now
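
A minimal sketch of calling it, assuming the official voyageai Python client and a VOYAGE_API_KEY set in the environment; the input texts are placeholders:

    import voyageai

    # The client picks up VOYAGE_API_KEY from the environment.
    vo = voyageai.Client()

    result = vo.embed(
        ["first document chunk", "second document chunk"],
        model="voyage-3-large",
        input_type="document",  # use "query" for query-side embeddings
    )

    # result.embeddings is a list of float vectors, one per input text.
    print(len(result.embeddings), len(result.embeddings[0]))

Note that as a hosted API it trades the self-hosting option of the open models above for convenience, so "SOTA on the benchmark" is only part of the decision.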