
168 points | 1wheel | 1 comment
bjterry No.40434403
It would be interesting to allow users of models to customize inference by tweaking these features, sort of like a semantic equalizer for LLMs. My guess is that this wouldn't work as well as fine-tuning, since fine-tuning would tweak all the features at once toward your use case, but the equalizer would require zero training data.

The prompt itself can trigger the features, so if you say "Try to weave in mentions of San Francisco" the San Francisco feature will be more activated in the response. But having a global equalizer could reduce drift as the conversation continued, perhaps?
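The "semantic equalizer" idea can be sketched in a few lines. This is a toy illustration, not anything from the actual paper: it assumes a hypothetical SAE-style decoder matrix where each row is a feature's direction in the residual stream, and a user-facing `gains` dict of slider settings that nudges the activation along the chosen feature directions.

```python
import numpy as np

rng = np.random.default_rng(0)
D_MODEL, N_FEATURES = 64, 512  # toy sizes, not from any real model

# Hypothetical SAE decoder: row i is feature i's unit direction in the
# residual stream.
decoder = rng.standard_normal((N_FEATURES, D_MODEL))
decoder /= np.linalg.norm(decoder, axis=1, keepdims=True)

def apply_equalizer(residual, gains):
    """Steer one activation vector by user-set feature sliders.

    residual: (d_model,) activation at some layer/position
    gains: {feature_index: strength}, e.g. {42: 5.0} boosts feature 42
    """
    steered = residual.copy()
    for idx, gain in gains.items():
        steered += gain * decoder[idx]
    return steered

residual = rng.standard_normal(D_MODEL)
steered = apply_equalizer(residual, {42: 5.0})
# The change is exactly 5x feature 42's direction:
delta = steered - residual
```

The appeal over fine-tuning is visible here: `gains` is set at inference time with no training data, but it also moves the activation the same way at every position, which is the bluntness the reply below points out.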

replies(2): >>40435401 >>40436786
1. kromem No.40436786
At least for right now this approach would in most cases still be like using a shotgun instead of a scalpel.

Over the next year or so I'm sure it will be refined enough to act more like a vector multiplier on the activation, but simply flipping a feature fully on is going to create a very 'obsessed' model, as stated.
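The shotgun/scalpel distinction can be made concrete with a toy sketch (my own illustration, not from the paper): clamping a feature to a fixed high value fires it even where the context never activated it, while multiplying the feature's existing activation only amplifies it where it already fires.

```python
import numpy as np

def clamp_feature(acts, idx, value):
    # "Flipping it on": force the feature to a fixed high value regardless
    # of context -- the setting that produces an 'obsessed' model.
    out = acts.copy()
    out[idx] = value
    return out

def scale_feature(acts, idx, factor):
    # "Vector multiplier": scale the feature's existing activation, so it
    # is only boosted where the prompt already triggers it.
    out = acts.copy()
    out[idx] *= factor
    return out

# Toy feature activations: feature 0 inactive, feature 1 mildly active.
acts = np.array([0.0, 0.2, 0.0])
clamped = clamp_feature(acts, 0, 10.0)  # fires feature 0 out of nowhere
scaled = scale_feature(acts, 0, 10.0)   # inactive feature stays at zero
```

Under multiplicative steering an inactive feature stays silent, which is why it behaves more like an equalizer than a switch.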