343 points sillysaurusx | 2 comments
bitL No.35028540
How does LLaMA handle fast fine-tuning? Are they using transformer adapters for it?
replies(1): >>35031006 #
1. loufe No.35031006
It's already been adapted for Hugging Face Transformers[1], which should unlock its full potential. Oobabooga has integrated the change into text-generation-webui[2], so from what I understand we can already access a large chunk of that potential.

[1] https://github.com/huggingface/transformers/pull/21955

[2] https://github.com/oobabooga/text-generation-webui/commit/90...
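On the parent's adapter question: one common fast fine-tuning approach is a LoRA-style low-rank adapter, where the pretrained weights stay frozen and only two small matrices are trained. A minimal NumPy sketch of that idea (all names and shapes here are illustrative, not the actual LLaMA or Transformers API):

```python
# Sketch of LoRA-style adapter math: y = x W + scale * (x A B),
# where W is frozen and only the low-rank A, B are trained.
import numpy as np

d, r = 8, 2                      # hidden size, adapter rank (r << d)
rng = np.random.default_rng(0)

W = rng.normal(size=(d, d))      # frozen pretrained weight (not updated)
A = rng.normal(size=(d, r))      # adapter down-projection (trainable)
B = np.zeros((r, d))             # adapter up-projection, zero-initialized

def forward(x, scale=1.0):
    # Frozen base path plus the low-rank adapter path.
    return x @ W + scale * (x @ A @ B)

x = rng.normal(size=(1, d))
# With B zero-initialized, the adapter contributes nothing at the start,
# so the model initially behaves exactly like the frozen base.
assert np.allclose(forward(x), x @ W)
```

Because only A and B (2·d·r parameters instead of d²) receive gradients, fine-tuning is much cheaper in memory and compute, which is what makes adapter methods attractive for large models like LLaMA.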

replies(1): >>35032636 #
2. bitL No.35032636
That's absolutely fantastic! Thanks for the links!