684 points by prettyblocks

I mean anything in the 0.5B-3B range that's available on Ollama (for example). Have you built any cool tooling that uses these models as part of your work flow?
mettamage No.42784724
I simply use it to de-anonymize code answers that come back from Claude.

Maybe I should write a plugin for it (open source):

1. Put all your work-related questions into the plugin; a local LLM rewrites each one as an abstracted question for you to preview and send

2. Then get the answer back with all the original data restored

E.g. df["cookie_company_name"] becomes df["a"] and back
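A minimal sketch of that round trip, assuming a plain find/replace mapping (the names `build_mapping` and `substitute` are hypothetical, not from any existing plugin). Word boundaries keep a short placeholder like "a" from matching letters inside other identifiers:

```python
import re
import string

def build_mapping(names):
    """Assign each sensitive name a short placeholder: 'a', 'b', ..."""
    return {name: string.ascii_lowercase[i] for i, name in enumerate(names)}

def substitute(text, mapping):
    """Whole-word find/replace using the mapping, longest keys first
    so that overlapping names don't clobber each other."""
    keys = sorted(mapping, key=len, reverse=True)
    pattern = re.compile(r"\b(" + "|".join(re.escape(k) for k in keys) + r")\b")
    return pattern.sub(lambda m: mapping[m.group(1)], text)

names = ["cookie_company_name"]            # hypothetical sensitive identifier
forward = build_mapping(names)             # {'cookie_company_name': 'a'}
reverse = {v: k for k, v in forward.items()}

question = 'df["cookie_company_name"].head()'
anonymized = substitute(question, forward)   # -> 'df["a"].head()'
restored = substitute(anonymized, reverse)   # back to the original question
```

The same `reverse` mapping is applied to the LLM's answer, so the model never sees the real names; only the local machine holds the mapping.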

replies(4): >>42784789 #>>42785696 #>>42785808 #>>42788777 #
sauwan No.42785808
Are you using the model to create a key-value mapping for find/replace (and then reversing it to de-anonymize), or are you using its outputs directly? If the latter, is it fast enough and reliable enough?