602 points by emrah | 1 comment
Samin100 ◴[] No.43746210[source]
I have a few private “vibe check” questions and the 4-bit QAT 27B model got them all correct. I’m kind of shocked at the information density locked in just 13 GB of weights. If anyone at DeepMind is reading this — Gemma 3 27B is the single most impressive open-source model I have ever used. Well done!
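As a rough sanity check on that figure (a back-of-the-envelope estimate assuming ~4 bits per weight and ignoring embedding and quantization-scale overhead):

    # ~27B parameters at ~4 bits per weight
    params = 27e9
    bits_per_weight = 4
    size_gb = params * bits_per_weight / 8 / 1e9
    print(size_gb)  # ~13.5 GB, in line with the ~13 GB on disk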
replies(1): >>43748557 #
itake ◴[] No.43748557[source]
I tried to use the -it models for translation, but they completely failed at translating adult content.

I think this means I either have to instruction-tune the -pt (pretrained) model myself or use another provider :(
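If I do go the instruction-tuning route, something like Hugging Face TRL's SFTTrainer on the -pt checkpoint is probably the minimal path. This is only a sketch, not a tested recipe: the dataset path is a placeholder, and a full fine-tune of a 27B model would realistically need LoRA/QLoRA on top of this.

    # Minimal SFT sketch (untested; assumes trl + datasets are installed,
    # and that "pairs.jsonl" holds {"text": ...} formatted training examples).
    from datasets import load_dataset
    from trl import SFTConfig, SFTTrainer

    dataset = load_dataset("json", data_files="pairs.jsonl", split="train")

    trainer = SFTTrainer(
        model="google/gemma-3-27b-pt",   # the pretrained (-pt) checkpoint
        train_dataset=dataset,
        args=SFTConfig(output_dir="gemma3-translate-sft"),
    )
    trainer.train()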

replies(2): >>43749007 #>>43749374 #
jychang ◴[] No.43749374[source]
Try mradermacher/amoral-gemma3-27B-v2-qat-GGUF
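Something like this runs a local quant of it with llama-cpp-python (the exact GGUF filename depends on which quant you download, so treat it as a placeholder):

    # Load a local GGUF and run one chat-style translation request.
    from llama_cpp import Llama

    llm = Llama(model_path="amoral-gemma3-27B-v2-qat.Q4_0.gguf", n_ctx=4096)
    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": "Translate to English: ..."}],
    )
    print(out["choices"][0]["message"]["content"])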
replies(1): >>43750435 #
itake ◴[] No.43750435[source]
My current architecture uses an on-device model for a fast translation, then replaces that with a slower, higher-quality translation (via an API call) once it's ready.

27B would be too big to run on-device, and I'm trying to keep my cloud costs low (meaning I can't afford to keep even a 27B model hosted 24/7).
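Roughly, the flow I mean looks like this (the two translate functions are stand-in stubs, not my actual code):

    # Two-tier translation: show a fast local draft immediately,
    # then replace it when the slower, higher-quality API result arrives.
    import asyncio

    def translate_on_device(text: str) -> str:
        return f"[fast draft] {text}"          # stub for the local model

    async def translate_via_api(text: str) -> str:
        await asyncio.sleep(1.0)               # stub for network + API latency
        return f"[polished] {text}"

    async def translate(text: str) -> None:
        print(translate_on_device(text))       # provisional result, shown at once
        print(await translate_via_api(text))   # swapped in once it's ready

    asyncio.run(translate("Bonjour"))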