221 points whitefables | 5 comments
taosx ◴[] No.41856567[source]
For the people who self-host LLMs at home: what use cases do you have?

Personally, I have some notes and bookmarks that I'd like to scrape, then have an LLM summarize, generate hierarchical tags, and store in a database. For the notes part at least, I wouldn't want to give them to another provider; even for the bookmarks, I wouldn't be comfortable passing my reading profile to anyone.
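
The shape I'm imagining is roughly this (a minimal sketch, assuming a local ollama server on its default port; the model name, prompt, and table layout are placeholders rather than a settled design):

    # Sketch: summarize one note with a local model and store the result.
    # Assumes ollama is serving llama3.1:8b on its default port; the prompt
    # and schema are placeholders.
    import sqlite3
    import requests

    OLLAMA_URL = "http://localhost:11434/api/generate"

    def summarize(text):
        resp = requests.post(OLLAMA_URL, json={
            "model": "llama3.1:8b",
            "prompt": "Summarize this note in two sentences, then list "
                      "hierarchical tags like topic/subtopic:\n\n" + text,
            "stream": False,
        })
        resp.raise_for_status()
        return resp.json()["response"]

    db = sqlite3.connect("notes.db")
    db.execute("CREATE TABLE IF NOT EXISTS summaries (note TEXT, summary TEXT)")
    note = open("note.txt").read()
    db.execute("INSERT INTO summaries VALUES (?, ?)", (note, summarize(note)))
    db.commit()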

replies(11): >>41856653 #>>41856701 #>>41856881 #>>41856970 #>>41856992 #>>41857395 #>>41858199 #>>41858353 #>>41861443 #>>41864562 #>>41890288 #
TechDebtDevin ◴[] No.41856881[source]
I keep an 8B model running with ollama/openwebui to ask it to format things, summarize text, and generate SQL/simple bash commands and whatnot.
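
A typical one-off call looks something like this (a sketch using the ollama Python client; the model name and prompt are just illustrative):

    # Sketch of a one-off request via the ollama Python client
    # (pip install ollama); model and prompt are illustrative.
    import ollama

    reply = ollama.chat(
        model="llama3.1:8b",
        messages=[{
            "role": "user",
            "content": "Write a bash one-liner that renames all *.txt "
                       "files in the current directory to lowercase.",
        }],
    )
    print(reply["message"]["content"])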
replies(1): >>41856966 #
worldsayshi ◴[] No.41856966[source]
So an 8B model is really smart enough to write scripts for you? How often does it fail?
replies(1): >>41857044 #
1. wokwokwok ◴[] No.41857044[source]
> So 8b is really smart enough to write scripts for you?

Depends on the model, but in general, no.

...but it's fine for simple one-liner commands like "how do I revert my commit?" or "rename these files to camelCase".

> How often does it fail?

Immediately and constantly if you ask anything hard.

An 8B model is not ChatGPT. The 3B model in the OP is not ChatGPT.

The capability gap compared to Sonnet/4o is like a potato versus a car.

Search for 'LLM Leaderboard' and you can see for yourself. The 8B models do not even rank. They're generally not capable enough to use as a self-hosted assistant.

replies(2): >>41857515 #>>41859155 #
2. worldsayshi ◴[] No.41857515[source]
I really hope we can get Sonnet-like performance down to a single consumer-level GPU sometime soon. Maybe the hardware will get there before the models do.
replies(1): >>41861683 #
3. lolinder ◴[] No.41859155[source]
> Search for 'LLM Leaderboard' and you can see for yourself. The 8B models do not even rank.

This is not true. On benchmarks, maybe, but I find the LLM Arena more accurately accounts for the subjective experience of using these things, and Llama 3.1 8B ranks relatively high, outperforming GPT-3.5 and certain iterations of GPT-4.

Where the 8Bs do struggle is that they don't have as deep a repository of knowledge, so using them without some form of RAG won't get you results as good as a plain larger model. But frankly I'm not convinced that RAG-free chat is the future anyway, and 8B models are extremely fast and cheap to run. Combined with good RAG they can do very well.
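
To make "combined with good RAG" concrete, the shape is roughly this (a toy sketch; a real setup would use a vector database and a proper chunking strategy, and the model names and documents here are illustrative):

    # Toy RAG sketch: embed docs, retrieve the closest one, stuff it into
    # the prompt. Model names and documents are illustrative.
    import ollama

    docs = [
        "Our deploy script lives in scripts/deploy.sh and takes an env name.",
        "Database migrations run via `make migrate` before each release.",
    ]

    def embed(text):
        return ollama.embeddings(model="nomic-embed-text", prompt=text)["embedding"]

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / ((sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5))

    question = "How do I run the migrations?"
    q = embed(question)
    context = max(docs, key=lambda d: cosine(q, embed(d)))

    reply = ollama.chat(
        model="llama3.1:8b",
        messages=[{"role": "user",
                   "content": "Context:\n" + context + "\n\nQuestion: " + question}],
    )
    print(reply["message"]["content"])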

replies(1): >>41899615 #
4. TechDebtDevin ◴[] No.41861683[source]
Well, considering it probably takes several hundred GB of VRAM to run inference for Claude, it's going to be a while.

But yes, like the guy above said, it's really only helpful for one-line commands. Like if I forgot some sort of flag that's available for a certain command, or random things I don't work with often enough to memorize their little build commands, etc. It's not helpful for programming, just simple commands.

It can also help make unstructured or messy data more readable, although there's potential to hallucinate if the context is at all large.

5. wokwokwok ◴[] No.41899615[source]
All I can say is that, in my experience, this is the difference between wanting something to be true and it actually being true.

> 8B models are extremely fast and cheap to run

yes.

> Combined with good RAG they can do very well.

This is simply not true. They perform at a level which is useful for simple, trivial tasks.

If you consider that 'doing well', then sure.

However, if, like the parent post, you want to be writing scripts, which is specifically what they asked about... then: heck, what 8B are you using? Because Llama 3.1 is shit at it out of the box.

¯\_(ツ)_/¯

A working unit test can take 6 or 7 iterations with a good prompt. Forget writing logic. Creating classes? Using RAG to execute functions from a spec? Forget it.

That's not the level that I need for an assistant.