
684 points by prettyblocks | 1 comment

I mean anything in the 0.5B-3B range that's available on Ollama (for example). Have you built any cool tooling that uses these models as part of your workflow?
flippyhead No.42785739
I have a tiny device that listens to conversations between two or more people and constantly tries to declare a "winner".
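
flippyhead didn't share implementation details, but the judging step might look something like the minimal sketch below: feed a rolling window of transcribed lines to a small local model served by Ollama. The endpoint, model name, and the declare_winner helper are assumptions for illustration, not flippyhead's code.

    # Hypothetical sketch of the judging step -- not flippyhead's actual code.
    # Assumes Ollama is running locally on its default port (11434) with a
    # small model such as llama3.2:1b already pulled, and that speech-to-text
    # elsewhere produces speaker-labelled transcript lines.
    import requests

    def declare_winner(transcript_lines, model="llama3.2:1b"):
        prompt = (
            "Here is a conversation between two or more people. "
            "Declare a single winner and give a one-sentence reason.\n\n"
            + "\n".join(transcript_lines[-40:])  # rolling window of recent lines
        )
        resp = requests.post(
            "http://localhost:11434/api/generate",
            json={"model": model, "prompt": prompt, "stream": False},
            timeout=60,
        )
        resp.raise_for_status()
        return resp.json()["response"]

    print(declare_winner(["Alice: I told you so.", "Bob: Fine, you were right."]))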
mkaic No.42787108
This reminds me of the antics of streamer DougDoug, who often uses LLM APIs to live-summarize, analyze, or interact with his (often multi-thousand-strong) Twitch chat. Most recently I saw him do a GeoGuessr stream where he had ChatGPT assume the role of a detective who must comb through the thousands of chat messages for clues about where the chat thinks the location is, then synthesize the clamor into a final guess. Aside from constantly being trolled by people spamming nothing but "Kyoto, Japan" in chat, it occasionally demonstrated a pretty effective incarnation of "the wisdom of the crowd" and was strikingly accurate at times.
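
DougDoug's actual setup calls the ChatGPT API and isn't public in detail; as a rough sketch of the aggregation idea only, here is how the detective prompt could be pointed at a small local model via Ollama, to stay on the thread's topic. The guess_location helper, prompt wording, and message cap are assumptions.

    # Rough sketch of the crowd-aggregation idea -- hypothetical, not DougDoug's
    # setup. Assumes a local Ollama server with a small model pulled, and a list
    # of raw chat messages collected elsewhere.
    import requests

    def guess_location(chat_messages, model="llama3.2:1b"):
        prompt = (
            "You are a detective. The messages below are Twitch chat guesses "
            "about a GeoGuessr round. Ignore spam (e.g. repeated 'Kyoto, Japan'), "
            "weigh the consensus, and output one final location guess.\n\n"
            + "\n".join(chat_messages[:2000])  # cap the number of messages sent
        )
        resp = requests.post(
            "http://localhost:11434/api/generate",
            json={"model": model, "prompt": prompt, "stream": False},
            timeout=120,
        )
        resp.raise_for_status()
        return resp.json()["response"]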