Start time was also a huge issue for building anything usable, so I'm glad to see that being worked on. There's potential here, but I'm still waiting on more direct API/calling access. Context size is also a bit of a limitation. I think categorization is a potentially great use case, but without additional fine-tuning/alignment training, and with the context size fairly low, I had trouble figuring out where I could actually make use of tagging or summarizing.
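To make the tagging/categorization point concrete, this is roughly the kind of thing I was hoping to run locally; a minimal sketch, assuming an OpenAI-style completions endpoint on localhost (the URL, model name, and response shape here are placeholders I made up for illustration, not whatever this actually exposes):

```python
import requests

def tag_text(text: str, tags: list[str]) -> str:
    """Ask a locally running model to pick the best tag for a short chunk of text."""
    prompt = (
        "Pick the single best tag for the text below.\n"
        f"Tags: {', '.join(tags)}\n"
        f"Text: {text}\n"
        "Answer with one tag only."
    )
    resp = requests.post(
        "http://localhost:8080/v1/completions",  # placeholder local endpoint
        json={"model": "local-model", "prompt": prompt, "max_tokens": 8},
        timeout=60,
    )
    resp.raise_for_status()
    # Placeholder response shape, modeled on OpenAI-compatible local servers
    return resp.json()["choices"][0]["text"].strip()

if __name__ == "__main__":
    print(tag_text("Receipt for the March electricity bill",
                   ["finance", "travel", "health"]))
```

Even something this small runs into the context limit quickly once the text being tagged is more than a paragraph or two, which is where I got stuck.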
So in general, as it stands, I had a lot of trouble figuring out what I could personally build with this that would be genuinely useful to run locally, and where a separate tool that didn't use AI at all wouldn't have been preferable. But I'm very excited to see it continue to get optimized; I think locally running models are very important right now.