
1311 points by msoad | 1 comment
1. danShumway No.35394222
I messed around with 7B and 13B and they gave interesting results, although not consistent enough for me to figure out what to do with them. I'm curious to try out the 30B model.

Start time was also a huge issue with building anything usable, so I'm glad to see that being worked on. There's potential here, but I'm still waiting on more direct API/calling access. Context size is a bit of a problem too. I think categorization is a potentially great use, but without additional alignment training, and with the context window fairly small, I had trouble figuring out where I could make use of tagging/summarizing.
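To make the context-size constraint concrete, here's a minimal sketch of the usual workaround for summarizing/tagging long documents with a small context window: split the input into chunks that each fit the window, then run the model on each chunk. The function name and the ~4-characters-per-token ratio are my own rough assumptions, not anything from the comment or a real tokenizer.

```python
# Hypothetical sketch: chunk a long document so each piece fits a small
# context window, leaving room for the prompt and the model's response.
# The 4-chars-per-token ratio is a crude heuristic, not a real tokenizer.

def chunk_for_context(text, context_tokens=2048, reserved_tokens=512):
    """Split text into pieces small enough that each piece, plus the
    reserved prompt/response budget, fits the model's context window."""
    budget_chars = (context_tokens - reserved_tokens) * 4  # ~4 chars/token
    chunks = []
    while text:
        chunks.append(text[:budget_chars])
        text = text[budget_chars:]
    return chunks

pieces = chunk_for_context("x" * 20000)
print(len(pieces))  # 20000 chars / 6144-char budget -> 4 chunks
```

In practice you'd split on sentence or paragraph boundaries rather than raw character offsets, and then summarize the per-chunk summaries in a second pass, but the losses at chunk boundaries are part of why a small context window makes these uses awkward.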

So in general, as it stands, I had a lot of trouble figuring out what I could personally build with this that would be genuinely useful to run locally, and where it wouldn't be preferable to build a separate tool that didn't use AI at all. But I'm very excited to see it continue to get optimized; I think locally running models are very important right now.