
164 points by ksec | 1 comment
1. PaulRobinson No.44500056
If we're getting almost-comparable results from models that can run locally (especially on dedicated silicon that we know Apple are good at designing and shipping at scale), I think we might hit a turning point soon. This was an intern project, so hopefully they'll now push a bit more resources at it: a 7B-param model on my laptop that gets near Gemini or Claude standards is going to win my money, every day.

On a purely tech point: I'm not working anywhere near the cutting edge of AI research, but I hadn't realised so much had been done - and was possible - with diffusion models for text. Will need to dig into that, as it looks fascinating.
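
For anyone else coming to this cold, here's a minimal toy sketch of the core idea behind diffusion-style text generation: start from a fully masked sequence and repeatedly ask a denoiser to fill in the positions it's most confident about. Everything here (toy_denoiser, the tiny VOCAB, the unmasking schedule) is a made-up stand-in for illustration, not Apple's model or any real library:

    # Toy sketch of masked-diffusion text generation.
    # A real system would use a trained transformer as the denoiser;
    # this one just guesses randomly to show the loop structure.
    import random

    VOCAB = ["the", "model", "runs", "locally", "on", "device", "fast", "."]
    MASK = "<mask>"

    def toy_denoiser(tokens):
        """Stand-in for a trained network: return a (token, confidence)
        guess for every masked position."""
        return {i: (random.choice(VOCAB), random.random())
                for i, t in enumerate(tokens) if t == MASK}

    def generate(seq_len=8, steps=4):
        # Start from an all-masked sequence (pure "noise" in discrete diffusion).
        tokens = [MASK] * seq_len
        for step in range(steps):
            guesses = toy_denoiser(tokens)
            if not guesses:
                break
            # Unmask only the most confident fraction this step;
            # the rest stay masked and get re-predicted next iteration.
            k = max(1, len(guesses) // (steps - step))
            best = sorted(guesses.items(), key=lambda kv: kv[1][1], reverse=True)[:k]
            for i, (tok, _) in best:
                tokens[i] = tok
            print(f"step {step}: {' '.join(tokens)}")
        return tokens

    if __name__ == "__main__":
        random.seed(0)
        generate()

The appeal, as I understand it, is that the denoiser scores every masked position in parallel at each step, rather than emitting one token at a time the way an autoregressive model does, which is where the latency wins on local hardware would come from.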