
548 points by tifa2up | 1 comment
daemonologist No.45646275
I concur:

The big LLM-based rerankers (e.g. Qwen3-reranker) are what you always wanted your cross-encoder to be, and I highly recommend giving them a try. Unfortunately they're also quite computationally expensive.
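
For anyone who wants to try it, here's a minimal sketch of the yes/no scoring pattern the Qwen3-Reranker model card describes (the card's actual prompt template is longer and uses chat-format special tokens, so treat the prompt string below as an approximation to verify against the card):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # 0.6B is the smallest variant; the larger ones follow the same pattern.
    MODEL = "Qwen/Qwen3-Reranker-0.6B"
    tokenizer = AutoTokenizer.from_pretrained(MODEL, padding_side="left")
    model = AutoModelForCausalLM.from_pretrained(MODEL).eval()

    # The model is used as a binary judge: compare P("yes") vs P("no").
    yes_id = tokenizer.convert_tokens_to_ids("yes")
    no_id = tokenizer.convert_tokens_to_ids("no")

    def rerank(query, docs):
        """Return (score, doc) pairs, most relevant first."""
        prompts = [
            f'Judge whether the Document answers the Query. '
            f'Answer only "yes" or "no".\n'
            f"<Query>: {query}\n<Document>: {d}"
            for d in docs
        ]
        inputs = tokenizer(prompts, padding=True, truncation=True,
                           max_length=4096, return_tensors="pt")
        with torch.no_grad():
            # Next-token logits at the last position of each prompt.
            logits = model(**inputs).logits[:, -1, :]
        scores = torch.softmax(logits[:, [no_id, yes_id]], dim=-1)[:, 1]
        return sorted(zip(scores.tolist(), docs), reverse=True)

Even at 0.6B this is a full decoder forward pass per (query, document) pair, which is where the cost comes from -- rerank a short candidate list from a cheap first-stage retriever, not the whole corpus.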

Your metadata/tabular data often contains basic facts that a human takes for granted but which aren't repeated in every text chunk - injecting that metadata into the chunks can go a long way toward making the end model seem less clueless.
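
Concretely, the cheapest version of this is to prepend a metadata header to each chunk at indexing (or prompt-assembly) time. A sketch -- the field names and sample values are made-up examples, not a schema from anywhere:

    def contextualize(chunk, meta):
        """Prepend document-level facts so they travel with every chunk."""
        header = (
            f"Document: {meta.get('title', 'unknown')}\n"
            f"Author: {meta.get('author', 'unknown')}\n"
            f"Date: {meta.get('date', 'unknown')}\n"
            "---\n"
        )
        return header + chunk

    chunk = "Revenue grew 40% quarter over quarter."
    meta = {"title": "Q3 board deck", "author": "finance", "date": "2024-10-02"}
    print(contextualize(chunk, meta))

Whether you embed the header together with the chunk or only splice it in when building the prompt is a judgment call; both variants are common.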

The point about queries that don't work with simple RAG (like "summarize the most recent twenty documents") is very important to keep in mind. We made our UI very search-oriented and deemphasized the chat, to try to communicate to users that search is what's happening under the hood - the model only sees what you see.
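
One way to handle those queries is to route them away from vector search entirely when the intent is structural. A rough sketch, assuming the documents also live in a table with a created_at column (the regex heuristic and the schema are illustrative only):

    import re
    import sqlite3

    # Crude intent detection; a real router might be a small classifier.
    RECENCY = re.compile(r"(?:most recent|latest|last)\s+(\d+|\w+)", re.I)
    WORDS = {"ten": 10, "twenty": 20, "fifty": 50}  # extend as needed

    def route(query, db):
        m = RECENCY.search(query)
        if m:
            tok = m.group(1).lower()
            n = int(tok) if tok.isdigit() else WORDS.get(tok, 10)
            # Structural intent: similarity search would silently return
            # the wrong documents here, so hit the metadata store instead.
            rows = db.execute(
                "SELECT body FROM documents "
                "ORDER BY created_at DESC LIMIT ?", (n,)
            ).fetchall()
            return [r[0] for r in rows]
        return vector_search(query)  # the ordinary RAG path

    def vector_search(query):
        ...  # embed + ANN lookup, elided

The important part is that the router runs before retrieval, so "summarize the most recent twenty documents" never reaches the embedding index at all.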

replies(2): >>45646877 >>45647595
agentcoops No.45647595
I agree completely, especially on the difficulty of developing the user's mental model of what's going on with context, and on the need to move away from chat UX. It's interesting that there are still few public examples of non-chat UIs that make context management explicit. It's possible that the big names tried this and decided it wasn't worth it -- but from comments here it seems like everyone who has built a production RAG system has come to the opposite conclusion. I'm guessing the real reason lies elsewhere: for the consumer apps, controlling context (especially for free users) and inference time is one of the main levers for managing cost at scale. Private RAGs, on the other hand, care more about maximizing result quality and minimizing the time an employee spends on a particular problem, with cost per query much less of a concern -- that's been my experience, at least.