
548 points by tifa2up | 1 comment
leetharris No.45646303
Embedding-based RAG will always be just OK at best. It is useful for small parts of a chain or for tech demos, but in real-life use it will always falter.
esafak No.45646482
Compared with what?
leetharris No.45647936
Full-text agentic retrieval. Instead of cosine similarity on vectors, you parse metadata through an agentic loop.

To give a real-world example: how Claude Code works versus how Cursor's embedding database works.
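
A minimal sketch of the contrast in Python. This is not from either product; ask_model and grep are hypothetical stand-ins for an LLM call and a full-text matcher:

    import numpy as np

    # Embedding-based RAG: rank chunks by cosine similarity to the query vector.
    def cosine_top_k(query_vec, chunk_vecs, k=5):
        q = query_vec / np.linalg.norm(query_vec)
        c = chunk_vecs / np.linalg.norm(chunk_vecs, axis=1, keepdims=True)
        scores = c @ q                       # cosine similarity per chunk
        return np.argsort(scores)[::-1][:k]  # indices of the k best chunks

    # Agentic full-text retrieval: the model drives keyword search in a loop,
    # the way Claude Code greps a repo instead of querying a vector index.
    def agentic_search(ask_model, grep, question, max_steps=5):
        notes = []
        for _ in range(max_steps):
            # The model proposes the next query, or returns a final answer.
            action = ask_model(question, notes)  # e.g. {"query": ...} or {"answer": ...}
            if "answer" in action:
                return action["answer"]
            notes.extend(grep(action["query"]))  # plain full-text match
        return notes

The key difference: the first approach commits to one similarity ranking up front, while the second lets the model refine its queries based on what each search turns up.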

lifty No.45648797
How do you do that on 5 million documents?
leetharris No.45655545
People are usually not querying across 5 million documents in a single scope.

If you want something as simple as "suggest similar tweets" across millions of items, then embeddings still work.

But if you want something like "compare the documents across these three projects," then you would use full-text metadata extraction: keywords, summaries, tables of contents, etc., to derive data about each document and each chunk.
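
Roughly, the extract-then-scope pattern might look like this (all names hypothetical; llm is a stand-in for whatever model call you use):

    import json

    # Extract structured metadata once per chunk at index time.
    def extract_metadata(llm, chunk_text):
        prompt = ("Return JSON with keys 'keywords' (list), 'summary' (string), "
                  "and 'section' (string) for this text:\n" + chunk_text)
        return json.loads(llm(prompt))

    # Answer a scoped question by filtering on metadata, not by ranking the
    # whole corpus with vector similarity.
    def compare_projects(llm, index, project_ids):
        # 'index' maps project_id -> list of {"text": ..., "meta": ...} chunks.
        # Only the named projects are in scope, not all 5 million documents.
        scoped = [f"[{pid}] {chunk['meta']['summary']}"
                  for pid in project_ids
                  for chunk in index[pid]]
        return llm("Compare the documents across these projects:\n" + "\n".join(scoped))

The expensive extraction happens once per document; at query time you only pay for the chunks inside the requested scope.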