
684 points prettyblocks | 15 comments

I mean anything in the 0.5B-3B range that's available on Ollama (for example). Have you built any cool tooling that uses these models as part of your work flow?
1. mettamage ◴[] No.42784724[source]
I simply use it to de-anonymize code that I typed in via Claude

Maybe I should write a plugin for it (open source):

1. Put all your work-related questions into the plugin; an LLM will turn each into an abstract question for you to preview and send

2. Then get the answer back with all the original data restored

E.g. df["cookie_company_name"] becomes df["a"] and back
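The round-trip described above can be sketched as a simple substitution table. In the plugin a local LLM would pick out the sensitive names; here they are hard-coded for illustration:

```python
import string

def anonymize(text, sensitive_names):
    """Replace each sensitive name with a short placeholder; return masked text and the mapping."""
    mapping = {}
    for name, alias in zip(sensitive_names, string.ascii_lowercase):
        mapping[alias] = name
        text = text.replace(name, alias)
    return text, mapping

def deanonymize(text, mapping):
    """Reverse the substitution using the saved mapping."""
    for alias, name in mapping.items():
        text = text.replace(alias, name)
    return text

# The df example from above:
masked, mapping = anonymize('df["cookie_company_name"]', ["cookie_company_name"])
restored = deanonymize(masked, mapping)
```

A real implementation would need collision-safe placeholders (a bare "a" can easily occur in ordinary code), but the save-the-mapping, reverse-it-later shape is the whole idea.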

replies(4): >>42784789 #>>42785696 #>>42785808 #>>42788777 #
2. politelemon ◴[] No.42784789[source]
Could you recommend a tiny language model I could try out locally?
replies(1): >>42784953 #
3. mettamage ◴[] No.42784953[source]
Llama 3.2 has about 3.2B parameters. I have to admit, I use bigger models like Phi-4 (14.7B) and Llama 3.3 (70.6B), but I think Llama 3.2 could handle anonymization and de-anonymization of code
replies(2): >>42785057 #>>42785333 #
4. OxfordOutlander ◴[] No.42785057{3}[source]
+1 to this idea. I do the same, just locally using Ollama, also with Llama 3.2 3B
5. RicoElectrico ◴[] No.42785333{3}[source]
Llama 3.2 punches way above its weight. For general "language manipulation" tasks it's good enough - and it can be used on a CPU with acceptable speed.
replies(1): >>42785773 #
6. sitkack ◴[] No.42785696[source]
So you are using a local small model to remove identifying information and make the question generic, which is then sent to a larger model? Is that understanding correct?

I think this would have the additional benefit of not confusing the larger model with facts it doesn't need to know about. By erasing information, you allow its attention heads to focus on the pieces that matter.

Requires further study.

replies(1): >>42790194 #
7. seunosewa ◴[] No.42785773{4}[source]
How many tokens/s?
replies(1): >>42792310 #
8. sauwan ◴[] No.42785808[source]
Are you using the model to create a key-value pair for find/replace and then reversing it to de-anonymize, or are you using its outputs directly? If the latter, is it fast enough and reliable enough?
9. sundarurfriend ◴[] No.42788777[source]
You're using it to anonymize your code, not de-anonymize someone's code. I was confused by your comment until I read the replies and realized that's what you meant to say.
replies(1): >>42790197 #
10. mettamage ◴[] No.42790194[source]
> So you are using a local small model to remove identifying information and make the question generic, which is then sent to a larger model? Is that understanding correct?

Yep that's it

11. kreyenborgi ◴[] No.42790197[source]
I read it the other way: their code contains e.g. fetch(url, {pw: hunter123}), and they're asking Claude anonymized questions like "implement a handler for fetch(url, {pw: mycleartrxtpw})"

And then Claude replies:

fetch(url, {pw: mycleartrxtpw}).then(writething)

And then the local LLM converts the placeholder mycleartrxtpw back into hunter123 using its access to the real code

replies(2): >>42792369 #>>42793460 #
12. iamnotagenius ◴[] No.42792310{5}[source]
10-15t/s on 12400 with ddr5
13. sundarurfriend ◴[] No.42792369{3}[source]
> Put all your work-related questions into the plugin; an LLM will turn each into an abstract question for you to preview and send

So the LLM does both the anonymization into placeholders and, later, the replacing of the placeholders. Calling the latter step de-anonymization is confusing, though: it's "de-anonymizing" yourself to yourself. The overall purpose of the plugin is to anonymize OP to Claude, so to me at least that framing makes the whole thing clearer.

replies(1): >>42793493 #
14. mettamage ◴[] No.42793460{3}[source]
It's that, yea.

The flow would be:

1. Llama prompt: write a console log statement with my username and password: mettamage, superdupersecret

2. Claude prompt (edited by Llama): write a console log statement with my username and password: asdfhjk, sdjkfa

3. Claude replies: console.log('asdfhjk', 'sdjkfa')

4. Llama gets that input and replies to me: console.log('mettamage', 'superdupersecret')
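The four steps above can be sketched end to end. The stub functions here stand in for the local Llama call and the remote Claude call, and the secret-to-placeholder table is hard-coded, where the plugin would have the local model generate it:

```python
# Hypothetical mapping the local model would produce (step 1 -> step 2)
SECRETS = {"mettamage": "asdfhjk", "superdupersecret": "sdjkfa"}

def mask(prompt):
    # Step 2: swap real values for placeholders before sending to the remote model
    for real, fake in SECRETS.items():
        prompt = prompt.replace(real, fake)
    return prompt

def unmask(reply):
    # Step 4: restore real values in the remote model's reply
    for real, fake in SECRETS.items():
        reply = reply.replace(fake, real)
    return reply

def remote_model(prompt):
    # Stand-in for Claude (step 3): echoes a console.log with the two values it saw
    user, pw = prompt.rsplit(": ", 1)[1].split(", ")
    return f"console.log('{user}', '{pw}')"

prompt = ("write a console log statement with my username and password: "
          "mettamage, superdupersecret")
reply = unmask(remote_model(mask(prompt)))
```

The key property is that the remote model only ever sees the placeholders; the real values stay on the local side of mask/unmask.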

15. mettamage ◴[] No.42793493{4}[source]
I could've been a bit more clear, sorry about that.