
204 points tdchaitanya | 6 comments
fny ◴[] No.45094906[source]
Is there a reason human preference data is even needed? Don't LLMs already have a strong enough notion of question complexity to build a dataset for routing?
replies(3): >>45094974 #>>45095189 #>>45101110 #
1. delichon ◴[] No.45094974[source]
> a strong enough notion of question complexity

Aka Wisdom. No, LLMs don't have that. Neither do I; I usually have to step into the rabbit holes in order to detect them.

replies(1): >>45095394 #
2. fny ◴[] No.45095394[source]
"Do you think you need to do high/medium/low amount of thinking to answer X?" seems well within an LLMs wheelhouse if the goal is to build an optimized routing engine.
replies(1): >>45095871 #
3. nutjob2 ◴[] No.45095871[source]
How do you think an LLM could come by that information? Do you think LLM vendors are logging performance and feeding it back into the model, or is there some other mechanism?
replies(3): >>45096007 #>>45096296 #>>45096334 #
4. ◴[] No.45096007{3}[source]
5. carlhjerpe ◴[] No.45096296{3}[source]
Yes, that's why they keep getting better, and why Anthropic is switching its privacy policy defaults to "please eat my data".
6. adtac ◴[] No.45096334{3}[source]
Why not something dumb like this: https://chatgpt.com/share/68b60199-b6ac-8009-b50d-3e7cfff1d7... (gpt-4o)
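Presumably the linked chat just asks the model directly; a bare-bones version of that kind of call, with the exact prompt wording being my guess:

    from openai import OpenAI

    # One-shot difficulty check, roughly what the linked gpt-4o chat likely does.
    reply = OpenAI().chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content":
            "Do you need a low, medium, or high amount of thinking to answer "
            "this question? Answer with one word: What is 2 + 2?"}],
    )
    print(reply.choices[0].message.content)  # e.g. "low"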