
168 points by 1wheel | 1 comment
bjterry ◴[] No.40434403[source]
It would be interesting to allow users of models to customize inference by tweaking these features, sort of like a semantic equalizer for LLMs. My guess is that this wouldn't work as well as fine-tuning, since that would tweak all the features at once toward your use case, but the equalizer would require zero training data.

The prompt itself can trigger the features, so if you say "Try to weave in mentions of San Francisco" the San Francisco feature will be more activated in the response. But having a global equalizer could reduce drift as the conversation continued, perhaps?

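The "semantic equalizer" idea can be sketched as a per-feature gain applied to the model's hidden state during decoding, in the spirit of activation steering. This is a minimal illustrative sketch, not any model's actual API: `equalize`, and the assumption that you already have a unit feature direction (e.g. from a sparse autoencoder), are hypothetical.

```python
import numpy as np

def equalize(h, direction, gain):
    """Rescale the component of hidden state h along a feature direction.

    gain > 1 amplifies the feature, 0 < gain < 1 attenuates it, and
    gain = 0 ablates it. (Illustrative sketch of a "semantic equalizer";
    in practice this would run inside a forward hook at each layer.)
    """
    d = direction / np.linalg.norm(direction)  # unit feature direction
    coeff = h @ d                              # current feature activation
    # Add (gain - 1) copies of the feature component to reach gain * coeff.
    return h + (gain - 1.0) * coeff * d
```

A global equalizer would hold a dict of `(direction, gain)` pairs and apply this at every decoding step, which is what could keep a feature from drifting as the conversation continues.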
replies(2): >>40435401 #>>40436786 #
ericflo ◴[] No.40435401[source]
Related: https://vgel.me/posts/representation-engineering/
replies(1): >>40442805 #
1. bjterry ◴[] No.40442805[source]
Thanks!