At this point, the onus is on the developer to prove its value through A/B comparisons versus traditional RAG. No person or team has the bandwidth to try out this (n + 1)th solution.
Given the pace at which things are moving, likely none. You will have to keep making changes as and when you see newer techniques. One thing in your favor (arguably) is that every technique is highly dependent on the dataset and problem you are solving. So, even if you do not have the latest one implemented, you will be okay, as long as your evals and metrics are good. If this helps: skip the details, understand the basics, and go for your own implementation. One thing to look out for is new SOTA LLM releases and the jumps in capability. E.g., OpenAI did not announce it, but 4o started doing very well on vision (GPT-4 was okay; 4o is empirically quite a bit better). These jumps help when you update your pipeline.
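To make the "as long as your evals and metrics are good" point concrete, here is a minimal sketch of gating a pipeline change on your own eval set instead of chasing every new technique. All names (`should_ship`, `exact_match_score`, the toy pipelines) are hypothetical, not from any particular framework.

```python
# Minimal sketch: adopt a new technique only if it beats the current
# pipeline on your own eval set. Names here are hypothetical.
from typing import Callable, List, Tuple

Pipeline = Callable[[str], str]

def exact_match_score(pipeline: Pipeline, evals: List[Tuple[str, str]]) -> float:
    """Fraction of eval questions the pipeline answers exactly right."""
    hits = sum(1 for question, gold in evals if pipeline(question) == gold)
    return hits / len(evals)

def should_ship(candidate: Pipeline, baseline: Pipeline,
                evals: List[Tuple[str, str]], margin: float = 0.02) -> bool:
    """Ship the candidate only if it clears the baseline by some margin."""
    return exact_match_score(candidate, evals) >= exact_match_score(baseline, evals) + margin

# Toy example: the candidate pipeline answers one more question correctly.
evals = [("2+2", "4"), ("capital of France", "Paris")]
baseline = lambda q: {"2+2": "4"}.get(q, "?")
candidate = lambda q: {"2+2": "4", "capital of France": "Paris"}.get(q, "?")
print(should_ship(candidate, baseline, evals))  # → True
```

The exact metric (exact match here) matters far less than having a fixed eval set you trust; swap in whatever metric fits your dataset.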
It’s not hard for a product to swap the underlying LLM for a given task.
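As a sketch of why the swap is easy (hypothetical names throughout, not any real product's code): if calls go through a thin provider-agnostic interface, changing the underlying LLM for a task is a one-line change.

```python
# Hypothetical thin abstraction: each "provider" is just a function
# from prompt -> completion, registered under a model name.
from dataclasses import dataclass
from typing import Callable, Dict

ProviderFn = Callable[[str], str]
REGISTRY: Dict[str, ProviderFn] = {}

def register(name: str):
    def deco(fn: ProviderFn) -> ProviderFn:
        REGISTRY[name] = fn
        return fn
    return deco

@register("stub-gpt")           # stand-in for a real provider client
def stub_gpt(prompt: str) -> str:
    return f"[stub-gpt] {prompt}"

@register("stub-llava")         # stand-in for a different provider
def stub_llava(prompt: str) -> str:
    return f"[stub-llava] {prompt}"

@dataclass
class Task:
    model: str  # the only thing that changes when swapping LLMs

    def run(self, prompt: str) -> str:
        return REGISTRY[self.model](prompt)

task = Task(model="stub-gpt")
print(task.run("summarize this"))   # → [stub-gpt] summarize this
task.model = "stub-llava"           # swap the underlying model
print(task.run("summarize this"))   # → [stub-llava] summarize this
```

In practice the interface also has to paper over prompt-format and capability differences, which is where most of the real swap cost lives.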
Then Google did.
Then LLaVA.
The issue is that this technology has no moat (other than the cost to create models and datasets).
There’s not a lot of secret sauce you can use that someone else can’t trivially replicate, given the resources.
It’s going to come down to good ol’ product design and engineering.
The issue is OpenAI doesn’t seem to care about what their users want. (I don’t think their users know what they want either, but that’s another discussion.)
They want more money to make bigger models in the hope that nobody else can or will.
They want to achieve regulatory capture as their moat.
For all their technical abilities at scaling LLM training and inference, I don’t get the feeling that they have great product direction.