Edit: I've loaded Llama 3.1 8B Instruct (GGUF) and got 12.61 tok/sec, and 80 tok/sec for Llama 3.2 3B.
However, I've found the quality of smaller models to be quite lacking. Llama 3.2 3B, for example, is much worse than Gemma2 9B, which is the one I've found performs best while fitting comfortably.
Actual sentences are fine, but it doesn't follow prompts as well and doesn't "understand" the context very well.
Quantization brings down memory cost, but there seems to be a sharp quality decline below 5 bits with the models I've tried so far, so a larger but heavily quantized model usually performs worse than a smaller one at higher precision.
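To get a feel for what fits in VRAM, the raw weight footprint is roughly parameters × bits-per-weight / 8. A rough sketch (the bit widths are nominal; real GGUF K-quants average somewhat higher because some tensors stay at higher precision, and the KV cache and activations need memory on top of this):

```python
# Back-of-the-envelope GGUF weight size in GB: params (billions) * bpw / 8.
# Lower bound only: embeddings/output tensors are often kept at higher
# precision, and context (KV cache) adds more on top.
def approx_size_gb(params_billion: float, bits_per_weight: float) -> float:
    return params_billion * bits_per_weight / 8

# Parameter counts are approximate.
for name, params in [("Llama 3.2 3B", 3.2), ("Llama 3.1 8B", 8.0), ("Gemma2 9B", 9.2)]:
    sizes = ", ".join(f"{b}-bit: {approx_size_gb(params, b):.1f} GB" for b in (4, 5, 6, 8))
    print(f"{name:>13} -> {sizes}")
```

This is why an 8-9B model at 5-6 bits sits right around the 6GB mark, before counting context.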
So with only 6GB of GPU memory, I think you either have to accept the hit on inference speed from only partially offloading layers to the GPU, or accept fairly low model quality.
Doesn't mean the smaller models can't be useful, but don't expect ChatGPT 4o at home.
That said, if you've got a beefy CPU, it can be reasonable to have it handle a few of the layers.
Personally I found Gemma2 9B quantized to 6 bits (IIRC) to be quite useful. YMMV.
Testing performance this way, I got about 0.5-1.5 tokens per second with an 8GB, 4-bit-quantized model on an old DL360 rack-mount server with 192GB RAM and two E5-2670 CPUs, and about 20-50 tokens per second on my laptop with a mobile RTX 4080.
I use a Tesla P4 for ML stuff at home; it's built on the same GP104 chip as the GTX 1080 and has a CUDA compute capability of 6.1. A 2070 (they don't list the "Super") is 7.5.
For reference, the 4060 Ti, 4070 Ti, 4080 and 4090 are all 8.9, which is the highest compute capability for a gaming graphics card.
I tried gemma-2-27b-it-Q4_K_L but it's not as good, despite being larger.
Using llama.cpp and models from here[1].
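For the partial-offload approach mentioned above, llama.cpp's `-ngl` (`--gpu-layers`) flag controls how many layers go to the GPU. Something like this, where the paths, model file, and layer count are illustrative placeholders to tune for your own setup:

```shell
# Offload 20 of the model's layers to the GPU; the rest run on CPU.
# Raise -ngl until VRAM is nearly full, lower it if you hit OOM.
./llama-cli \
  -m ./models/gemma-2-9b-it-Q6_K.gguf \
  -ngl 20 \
  -c 4096 \
  -p "Explain quantization in one paragraph."
```

With 6GB of VRAM the sweet spot is usually offloading as many layers as fit, then letting the CPU take the remainder, which is where a beefy CPU helps.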