
899 points | georgehill | 1 comment
samwillis No.36216196
ggml and llama.cpp are such a good platform for local LLMs; having some financial backing to support development is brilliant. We should be concentrating as much as possible on doing local inference (and training) on private data.

I want a local ChatGPT fine tuned on my personal data running on my own device, not in the cloud. Ideally open source too, llama.cpp is looking like the best bet to achieve that!

replies(6): >>36216377 #>>36216465 #>>36216508 #>>36217604 #>>36217847 #>>36221973 #
shostack No.36221973
I've been trying to figure out what I might need to do in order to turn my Obsidian vault into a dataset to fine tune against. I'd invest a lot more into it now if I thought it would be a key to an AI learning about me the way it does in the movie Her.
replies(2): >>36222384 #>>36384485 #
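A minimal sketch of what that first step might look like: flattening an Obsidian vault (a folder of Markdown files) into JSONL prompt/completion pairs, a format many fine-tuning pipelines accept. The title-as-prompt framing, function names, and record schema here are all illustrative assumptions, not a fixed recipe; a real dataset would need careful chunking and prompt design.

```python
import json
from pathlib import Path


def note_to_record(path: Path) -> dict:
    """Turn one Markdown note into a prompt/completion pair.

    Using the filename as the prompt is just one plausible framing
    (an assumption for this sketch, not a standard).
    """
    text = path.read_text(encoding="utf-8")
    return {
        "prompt": f"Write my note titled '{path.stem}':",
        "completion": text.strip(),
    }


def vault_to_jsonl(vault_dir: str, out_file: str) -> int:
    """Walk the vault, emit one JSON record per .md note, return the count."""
    records = [note_to_record(p) for p in sorted(Path(vault_dir).rglob("*.md"))]
    with open(out_file, "w", encoding="utf-8") as f:
        for rec in records:
            f.write(json.dumps(rec, ensure_ascii=False) + "\n")
    return len(records)
```

From there the JSONL file could feed a LoRA-style fine-tune or a retrieval index, whichever "learning about me" approach one settles on.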