
548 points tifa2up | 6 comments
1. 383toast
They should've tested other embedding models; there are better (and cheaper) ones than OpenAI's.
2. prettyblocks
Which do you suggest?
3. roze_sha
https://huggingface.co/spaces/mteb/leaderboard
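The scores on that leaderboard come from the mteb evaluation harness, so you can benchmark a candidate model yourself before committing to it. A minimal sketch, assuming the current mteb Python API (the model and task names here are just examples, not a recommendation):

    # Assumes `pip install mteb sentence-transformers`; check the mteb
    # docs for current usage, as the API has changed across versions.
    import mteb

    # Load any sentence-transformers-compatible embedding model.
    model = mteb.get_model("sentence-transformers/all-MiniLM-L6-v2")

    # Pick one task from the benchmark suite (illustrative choice).
    tasks = mteb.get_tasks(tasks=["Banking77Classification"])

    # Run the evaluation and write per-task scores to disk.
    evaluation = mteb.MTEB(tasks=tasks)
    results = evaluation.run(model, output_folder="results")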
4. 383toast
yep
5. leftnode
The Qwen3 600M and 4B embedding models are near state of the art and aren't too computationally intensive.
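A minimal sketch of running the smaller one locally via sentence-transformers, following the Hugging Face model card (the model ID and "query" prompt name are taken from the card; verify against your installed version):

    # Assumes `pip install sentence-transformers` (>= 3.x for similarity()).
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("Qwen/Qwen3-Embedding-0.6B")

    # Per the model card, queries use the instruction-style "query" prompt;
    # documents are encoded without a prompt.
    queries = ["Which embedding model is best for retrieval?"]
    documents = ["Qwen3 embedding models come in 0.6B, 4B, and 8B sizes."]

    query_emb = model.encode(queries, prompt_name="query")
    doc_emb = model.encode(documents)

    # Similarity between each query and each document.
    print(model.similarity(query_emb, doc_emb))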
6. remz14
You should use RTEB instead. See here for why: https://huggingface.co/blog/rteb

Here is that leaderboard: https://huggingface.co/spaces/mteb/leaderboard?benchmark_nam...

Voyage-3-large seems like SOTA right now
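Voyage models are API-only; a minimal sketch using the voyageai Python client (assumes `pip install voyageai` and a VOYAGE_API_KEY in the environment; check Voyage's docs for current model names and parameters):

    import voyageai

    vo = voyageai.Client()  # reads VOYAGE_API_KEY from the environment

    result = vo.embed(
        ["Which embedding model should I use?"],
        model="voyage-3-large",
        input_type="query",  # use "document" when embedding the corpus side
    )
    print(len(result.embeddings[0]))  # embedding dimensionality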