elzbardico ◴[] No.46233707[source]
LLMs have this strong bias towards generating code, because writing code is the default behavior from pre-training.

Removing code, renaming files, condensing, and other edits are mostly post-training, supervised-learning behavior. You have armies of developers across the world, making 17 to 35 dollars an hour, solving tasks step by step; their work is then used to generate prompt/response pairs of desired behavior for a lot of common development situations, including the desired output for things like tool calling, which is what operations like deleting code require.
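
As a rough sketch in Python, one such prompt/response pair might look like the following. The field names and the `delete_lines` tool are hypothetical, just to show the shape of the data, not any lab's actual schema:

    # Hypothetical shape of a single SFT example teaching "delete code" behavior.
    # Field names and tool names are illustrative, not any particular lab's schema.
    sft_example = {
        "prompt": (
            "The function `legacy_parse` in parser.py has no remaining callers. "
            "Remove it."
        ),
        "response": {
            "reasoning": "grep shows no references to legacy_parse outside its definition.",
            "tool_calls": [
                {
                    "name": "delete_lines",  # an *editing* tool call, not more generated code
                    "arguments": {"path": "parser.py", "start_line": 120, "end_line": 158},
                }
            ],
        },
    }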

A typical post-training dataset generation task would involve a scenario like: given this Dockerfile for a Python application, running pytest fails with an exception that foo is not found. The human notices that the package foo is not installed, changes the requirements.txt file and writes this down, then tries pip install and notices that the foo package requires a certain native library to be installed. The final output is a response with the appropriate tool calls in a structured format.
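
A minimal sketch of the kind of structured trace that scenario could produce; the tool names (`run_command`, `edit_file`), argument schemas, and the package/library versions and names are assumptions for illustration, not a real harness's API:

    # Hypothetical tool-call trajectory for the pytest / missing-package scenario above.
    # Tool names, argument schemas, versions, and library names are placeholders.
    trajectory = [
        {"tool": "run_command", "args": {"cmd": "pytest"}},       # fails: ModuleNotFoundError: foo
        {"tool": "edit_file", "args": {"path": "requirements.txt",
                                       "append": "foo==1.2.3"}},  # version is a placeholder
        {"tool": "run_command", "args": {"cmd": "pip install -r requirements.txt"}},  # fails: native lib missing
        {"tool": "edit_file", "args": {"path": "Dockerfile",
                                       "insert_after": "FROM python:3.11-slim",
                                       "text": "RUN apt-get update && apt-get install -y libfoo-dev"}},
        {"tool": "run_command", "args": {"cmd": "pytest"}},       # now passes; this trace becomes the training target
    ]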

Given that the amount of unsupervised pre-training is far larger than the amount of fine-tuning for most models, it is no surprise that, in any ambiguous situation, the model will default to what it knows best.

More post-training will usually improve this, but the quality of the human-generated dataset will probably be the upper bound on output quality, not to mention the risk of overfitting if the foundation-model labs embrace SFT too enthusiastically.

replies(1): >>46235033 #
hackernewds ◴[] No.46235033[source]
> Writing code is the default behavior from pre-training

What does this even mean? Could you expand on it?

replies(2): >>46235667 #>>46239575 #
bongodongobob ◴[] No.46235667[source]
He means that it is heavily biased to write code, not remove, condense, refactor, etc. It wants to generate more stuff, not less.
replies(2): >>46236717 #>>46240123 #
snet0 ◴[] No.46236717[source]
I don't see why this would be the case.
replies(2): >>46238248 #>>46240418 #
bunderbunder ◴[] No.46238248[source]
It’s because that’s what most resembles the bulk of the tasks it was being optimized for during pre-training.