255 points tbruckner | 2 comments
randomopining No.37420598
Are there any actual use cases for running this stuff on a local computer? Or are most of these models actually suited to running on remote clusters?
replies(3): >>37421417 #>>37421858 #>>37423070 #
logicchains No.37421417
The use case is that you want to generate pornographic, violent, or politically incorrect content, and would rather buy a powerful computer than rent a server (or you already own a powerful computer).
replies(2): >>37423085 #>>37440704 #
1. beardedwizard No.37423085
You what? You can run smaller, plenty powerful models on an M1 MacBook. Idk what the porn and violence angle is, but maybe keep that one to yourself.
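
For concreteness, a minimal sketch of what running a small model on an M1 MacBook can look like, using the llama-cpp-python bindings; the GGUF file name, prompt, and parameters here are placeholder assumptions, not anything the commenter specifically ran:

    # Minimal sketch: run a small quantized GGUF model locally via
    # llama-cpp-python (pip install llama-cpp-python).
    from llama_cpp import Llama

    llm = Llama(
        model_path="./mistral-7b-instruct.Q4_K_M.gguf",  # placeholder; any small GGUF checkpoint
        n_ctx=2048,       # context window
        n_gpu_layers=-1,  # offload all layers to Metal on Apple Silicon
    )

    out = llm("Q: Why run an LLM locally? A:", max_tokens=128)
    print(out["choices"][0]["text"])

A 4-bit quantized 7B model like this fits comfortably in 16 GB of unified memory, which is roughly what makes the M1 MacBook claim plausible.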
replies(1): >>37423271 #
2. logicchains No.37423271
One of the largest use cases for local LLMs is NSFW chatbots (DIY Replika, AI girlfriends/boyfriends), as the hosted services are too censored to be used for this. Yes, there are smaller models, but they're not as intelligent. Similarly, people using LLMs as a writing aid need to run local ones if they're writing a story (or, e.g., a D&D campaign) involving violence, as the hosted ones are generally unwilling to narrate graphic violence, and the smarter the model, the better the story quality.

Given that censorship is one of the biggest complaints about hosted LLMs, it should be no surprise that some of the main use cases driving local LLMs involve creating content that the censored models are unwilling to produce.