I've run smaller models (I forget which ones; this was about a year ago) on my laptop just to see what happened. I was quite surprised that I could get them to write simple Python programs. Very surprised, actually, which led me to re-evaluate my thinking on LLMs in general. Anyway, since then I've been using the regular hosted services, because for now I don't see a worthwhile tradeoff in running models locally. Apart from the hardware needed, I'd expect to be constantly downloading O(100G) model files as they improve on a weekly basis, and I don't have the internet capacity to easily handle that.
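
For what it's worth, the kind of experiment I mean is roughly this sketch; the model name and prompt are just illustrative, since I don't remember exactly what I ran:

```python
# Minimal local-inference sketch: load a small open model and ask it to
# write a simple Python function. Model choice here is a placeholder.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # any small model that fits in laptop RAM
)

prompt = "Write a Python function that reverses a string."
result = generator(prompt, max_new_tokens=200, do_sample=False)
print(result[0]["generated_text"])
```

Even something that small producing working code was enough to surprise me at the time.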