147 points by fzliu | 2 comments
1. lunarmony
Researchers have discussed the limitations of vector-based retrieval from a rank perspective, in various forms, for a few years. It has further been shown that better alternatives exist: some low-rank approaches can theoretically approximate arbitrary high-rank distributions while retaining MIPS-level inference efficiency (see e.g. Retrieval with Learned Similarities, https://arxiv.org/abs/2407.15462). Such solutions are already used in production at Meta and at LinkedIn.
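Roughly, the idea there (Mixture of Logits) is to replace the single dot product with a gated mixture of dot products over sub-embeddings. Here is a toy NumPy sketch of that kind of scoring function; the dimensions, the gating network, and conditioning the gates only on the query are my simplifications, not the paper's exact architecture:

    import numpy as np

    rng = np.random.default_rng(0)
    P, d_p = 8, 32                       # toy split: 8 components of 32 dims (256 total)
    d = P * d_p
    n_items = 10_000

    # Toy stand-ins for learned query/item encoders.
    items = rng.standard_normal((n_items, d)).astype(np.float32)
    q = rng.standard_normal(d).astype(np.float32)
    W_gate = rng.standard_normal((d, P)).astype(np.float32)   # toy gating parameters

    def mol_scores(q, items):
        """Gate-weighted sum of per-component inner products,
        instead of one big <q, x> dot product."""
        q_parts = q.reshape(P, d_p)                            # (P, d_p)
        item_parts = items.reshape(-1, P, d_p)                 # (n, P, d_p)
        logits = np.einsum("npd,pd->np", item_parts, q_parts)  # per-component dot products
        gates = np.exp(q @ W_gate)
        gates /= gates.sum()                                   # softmax -> mixture weights (P,)
        return logits @ gates                                  # (n,) final scores

    print(np.argsort(-mol_scores(q, items))[:10])              # toy top-10

Because the gates vary with the query (and, in the paper, can vary with the item as well), the resulting similarity is not a single fixed inner product, which is where the expressiveness claim comes from.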
2. yorwba
I don't think Mixture of Logits from the paper you link circumvents the theoretical limitations pointed out here, since the dataset sizes it is evaluated on mostly stay well below the limit.

In the end they still rely on Maximum Inner Product Search, just with several lookups over smaller partitions of the full embedding. Their largest dataset is Books, for which this paper suggests you'd need more than 512 embedding dimensions, and there MoL with 256-dimensional embeddings split into 8 parts of 32 each has an abysmal hit rate.
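To make that concrete, the lookup structure amounts to something like the following two-stage sketch: one top-k inner-product search per 32-dim partition to gather candidates, then exact mixture re-scoring on their union. Brute-force top-k stands in for an actual MIPS/ANN index, and the gating is again a simplification of the paper's:

    import numpy as np

    rng = np.random.default_rng(1)
    P, d_p = 8, 32                        # 256 dims split into 8 parts of 32
    d = P * d_p
    n_items, k = 10_000, 200

    items = rng.standard_normal((n_items, d)).astype(np.float32)
    q = rng.standard_normal(d).astype(np.float32)
    W_gate = rng.standard_normal((d, P)).astype(np.float32)   # toy gating parameters

    item_parts = items.reshape(n_items, P, d_p)
    q_parts = q.reshape(P, d_p)

    # Stage 1: one MIPS lookup per partition (brute force here, an ANN index in practice).
    candidates = set()
    for p in range(P):
        part_scores = item_parts[:, p, :] @ q_parts[p]        # inner products on partition p only
        candidates.update(np.argpartition(-part_scores, k)[:k].tolist())
    candidates = np.fromiter(candidates, dtype=np.int64)

    # Stage 2: exact gated re-scoring on the candidate union.
    gates = np.exp(q @ W_gate)
    gates /= gates.sum()                                      # softmax mixture weights
    logits = np.einsum("npd,pd->np", item_parts[candidates], q_parts)
    print(candidates[np.argsort(-(logits @ gates))[:10]])     # final top-10

In this sketch the cheap part is Stage 1; the exact gated scoring only ever touches the candidate union.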

So that's hardly a demonstration that arbitrary high-rank distributions can be approximated well. MoL seems to approximate them better than the other approaches do, but all of them are clearly hampered by the small embedding size.