
250 points | lewq | 1 comment
mark_l_watson No.42138010
After just spending 15 minutes trying to get something useful accomplished, anything useful at all, with the latest Apple Intelligence beta on an M1 iPad Pro (16 GB RAM), this article appealed to me!

I have been running the 32B-parameter qwen2.5-coder model on my 32 GB M2 Mac, and it is a huge help with coding.
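For illustration, here is a minimal sketch of querying such a model locally, assuming Ollama (which the comment doesn't name) is the runtime, running on its default port with the model already pulled:

    # Minimal sketch: query a locally served qwen2.5-coder model.
    # Assumes Ollama is running locally and `ollama pull qwen2.5-coder:32b`
    # has already been done.
    import requests

    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "qwen2.5-coder:32b",
            "prompt": "Write a Python function that reverses a linked list.",
            "stream": False,
        },
        timeout=300,
    )
    print(resp.json()["response"])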

The llama3.2-vision model does a great job processing screenshots. Small models like smollm2:latest can process a lot of text locally, very fast.
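A similar sketch for the vision case, again assuming Ollama as the local runtime; the screenshot path is only illustrative:

    # Minimal sketch: ask a local vision model to describe a screenshot.
    # Assumes Ollama is serving llama3.2-vision; the file path is illustrative.
    import base64
    import requests

    with open("screenshot.png", "rb") as f:
        img_b64 = base64.b64encode(f.read()).decode("utf-8")

    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "llama3.2-vision",
            "prompt": "Summarize the text visible in this screenshot.",
            "images": [img_b64],
            "stream": False,
        },
        timeout=300,
    )
    print(resp.json()["response"])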

Open source front ends like Open WebUI are improving rapidly.

All the tools are lining up for do-it-yourself local AI.

The only commercial vendor right now that I think is doing a fairly good job at an integrated AI workflow is Google. Last month I had all my email directed to my Gmail account, and the Gemini Advanced web app did a really good job integrating email, calendar, and Google Docs. Job well done. That said, I am back to using ProtonMail and trying to build local AIs for my workflows.

I am writing a book on the topic of local, personal, and private AIs.

zerop No.42139063
Have you tried RAG on Open WebUI? How does it do at answering questions from source docs?
mark_l_watson No.42139613
Not yet. It has ‘Knowledge sources’ that you can set up, and I think that supplies data for built-in RAG, but I won't be sure until I try it.
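For reference, a rough sketch of what RAG over source docs involves, independent of Open WebUI's Knowledge feature (whose internals aren't covered here): embed chunks, retrieve the closest ones, and prepend them to the prompt. This assumes Ollama with an embedding model such as nomic-embed-text pulled, and uses smollm2 for the answer; the document chunks are placeholders.

    # Rough RAG sketch: embed chunks, pick the most similar one, add it as context.
    # Assumes Ollama is running with nomic-embed-text and smollm2 pulled.
    import requests
    import numpy as np

    def embed(text):
        r = requests.post("http://localhost:11434/api/embeddings",
                          json={"model": "nomic-embed-text", "prompt": text})
        return np.array(r.json()["embedding"])

    docs = ["Chunk one of a source document...",
            "Chunk two covering a different topic..."]
    doc_vecs = [embed(d) for d in docs]

    question = "What does the document say about topic X?"
    q_vec = embed(question)

    # Cosine similarity to pick the most relevant chunk.
    sims = [q_vec @ v / (np.linalg.norm(q_vec) * np.linalg.norm(v)) for v in doc_vecs]
    context = docs[int(np.argmax(sims))]

    r = requests.post("http://localhost:11434/api/generate",
                      json={"model": "smollm2:latest", "stream": False,
                            "prompt": f"Context:\n{context}\n\nQuestion: {question}"})
    print(r.json()["response"])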