
340 points by agomez314 | 2 comments
thwayunion No.35245821
Absolutely correct.

We already learned this lesson from self-driving cars. Passing a driver's test was already possible around 2015, but SDCs clearly aren't ready for L5 deployment even today.

There are also a lot of excellent examples of failure modes in object detection benchmarks.

Tests, such as driver's tests or standardized exams, are designed for humans. They make a lot of entirely implicit assumptions about failure modes and gaps in knowledge that are uniquely human. Automated systems work differently. They don't fail in the same way that humans fail, and therefore need different benchmarks.

Designing good benchmarks that probe GPT systems for common failure modes and weaknesses is actually quite difficult. Much more difficult than designing or training these systems, IME.

dcolkitt No.35246141
I'd also add that almost all standardized tests cover introductory material and are administered to millions of people. That kind of information is likely to be heavily represented in the training corpus. Most jobs, by contrast, require highly specialized domain knowledge that's probably not well represented in the corpus, and probably too expansive to fit into the context window.

Therefore standardized tests are probably "easy mode" for GPT, and we shouldn't over-generalize from its performance there to its ability to add economic value in actual, economically useful jobs. Fine-tuning is maybe a possibility, but it's expensive and fragile, and I don't think it's likely that every single job is going to get a fine-tuned version of GPT.

Tostino No.35246365
From what I've gathered, fine-tuning should be used to train the model on a task, such as: "the user asks a question; provide an answer, or follow up with more questions if there are unfamiliar concepts."

Fine-tuning should not be used to try to impart knowledge that didn't exist in the original training set; it's simply the wrong tool for the job.
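
To make that concrete, here's a rough sketch of what task-style fine-tuning data looks like: demonstrations of a behavior, not facts. The prompt/completion JSONL layout is the general style several fine-tuning APIs accept; the specific examples and filename below are invented.

    # Sketch: fine-tuning examples teach a response *behavior* (here, asking
    # a clarifying question), not new knowledge. Examples are made up.
    import json

    task_examples = [
        {"prompt": "User: How do I rotate my API keys?\nAssistant:",
         "completion": " Which service are the keys for? Once I know that, I can walk you through it."},
        {"prompt": "User: Our nightly ETL job failed.\nAssistant:",
         "completion": " Can you paste the error message or a log snippet? That will narrow down the cause."},
    ]

    with open("clarifying_questions.jsonl", "w") as f:
        for ex in task_examples:
            f.write(json.dumps(ex) + "\n")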

Knowledge graphs and vector similarity search seem like the way forward for building a corpus of information that can be searched, with the relevant pieces included in the context window for the specific question a user is asking, all without changing the model. This also makes it possible to keep only relevant information in the context window when the user changes the immediate task/goal.

Edit: You could think of it a little like the LLM being the analog of the CPU in a Von Neumann architecture, with the external knowledge graph or vector database as RAM/disk. You don't expect the CPU to hold all the context necessary to complete every task your computer does; it just needs enough to hold the complete context of the task it is working on right now.
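
Here's a minimal, toy sketch of that retrieval flow: the model stays fixed, only an external store is searched, and the top matches are dropped into the context window. The documents, the bag-of-words "embedding", and the prompt format below are all made-up stand-ins; a real setup would use a proper embedding model and vector store.

    # Toy retrieval sketch: index documents, find the most similar ones to
    # the question, and place them in the prompt context.
    import numpy as np

    def embed(text, dim=512):
        # Bag-of-words hashing stand-in for a real embedding model.
        vec = np.zeros(dim)
        for word in text.lower().split():
            vec[hash(word) % dim] += 1.0
        norm = np.linalg.norm(vec)
        return vec / norm if norm else vec

    documents = [
        "Invoices are archived under /finance/archive after 90 days.",
        "The staging database schema is rebuilt nightly from migrations.",
        "VPN access requires a ticket approved by the security team.",
    ]
    doc_vectors = np.stack([embed(d) for d in documents])

    def build_prompt(question, top_k=2):
        # Vectors are unit length, so the dot product is cosine similarity.
        scores = doc_vectors @ embed(question)
        best = np.argsort(scores)[::-1][:top_k]
        context = "\n".join("- " + documents[i] for i in best)
        return "Context:\n" + context + "\n\nQuestion: " + question + "\nAnswer:"

    print(build_prompt("How do I get VPN access?"))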

visarga No.35248711
There can be footguns in the retrieval approach. Yes, you keep the model fixed, only add new data to your index, and then let the model query the index. But when the model gets two snippets from different documents, it may combine information from them even when that doesn't make sense. The model lacks context when it just retrieves arbitrary things based on search.
Tostino No.35289798
Yeah, honestly I see using a regular search index as a downside rather than a benefit with this tech. Conflicting info or low-quality blogspam seems to trip these LLMs up pretty badly.

Using a curated search index seems like a much better use case, especially for private data (company info, docs, DB schemas, code, chat logs, etc.).