
Embeddings are underrated (2024)

(technicalwriting.dev)
484 points by jxmorris12 | 1 comment
jas8425 | No.43965634
If embeddings are roughly the equivalent of a hash at least insofar as they transform a large input into some kind of "content-addressed distillation" (ignoring the major difference that a hash is opaque whereas an embedding has intrinsic meaning), has there been any research done on "cracking" them? That is, starting from an embedding and working backwards to generate a piece of text that is semantically close by?

I could imagine an LLM inference pipeline where the next token ranking includes its similarity to the target embedding, or perhaps instead the change in direction towards/away from the desired embedding that adding it would introduce.
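A minimal sketch of what that re-ranking might look like, assuming a small causal LM plus a separate sentence-embedding model (the specific models, the top-k candidate set, and the alpha blend of LM probability and embedding similarity are all illustrative choices, not anything from the thread or the article):

```python
# Sketch of embedding-guided decoding: at each step, take the LM's top-k
# next-token candidates and re-rank them by how close the extended text's
# embedding is to a target embedding. Model names and weights are assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from sentence_transformers import SentenceTransformer, util

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2")
embedder = SentenceTransformer("all-MiniLM-L6-v2")

def guided_generate(prompt: str, target_text: str,
                    steps: int = 30, k: int = 20, alpha: float = 0.5) -> str:
    """Greedy decoding where candidates are scored by a blend of
    LM log-probability and cosine similarity to a target embedding."""
    target_emb = embedder.encode(target_text, convert_to_tensor=True)
    text = prompt
    for _ in range(steps):
        ids = tok(text, return_tensors="pt").input_ids
        with torch.no_grad():
            logits = lm(ids).logits[0, -1]
        probs = torch.softmax(logits, dim=-1)
        top = torch.topk(probs, k)
        best, best_score = text, -float("inf")
        for p, tid in zip(top.values, top.indices):
            candidate = text + tok.decode(int(tid))
            cand_emb = embedder.encode(candidate, convert_to_tensor=True)
            sim = util.cos_sim(cand_emb, target_emb).item()
            score = alpha * sim + (1 - alpha) * p.log().item()
            if score > best_score:
                best, best_score = candidate, score
        text = best
    return text
```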

Put another way, the author gives the example:

> embedding("king") - embedding("man") + embedding("woman") ≈ embedding("queen")

What if you could do that but for whole bodies of text?
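For the single-word version, the identity can be checked directly against pretrained word vectors; a quick sketch using gensim (the GloVe model chosen here is just an example):

```python
# Check the king - man + woman ≈ queen analogy with pretrained word vectors.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=3))
# "queen" typically appears at or near the top of this list.
```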

I'm imagining being able to do "semantic algebra" with whole paragraphs/articles/books. Instead of just prompting an LLM to "adjust the tone to be more friendly", you could have the core concept of "friendly" (or some more nuanced variant thereof) and "add" it to your existing text, etc.
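A toy illustration of that "semantic algebra" idea, assuming a sentence-embedding model and a handful of made-up example sentences (this only shifts an embedding and retrieves the closest candidate rewrite; it is not a working tone-transfer system):

```python
# Estimate a "friendly" direction from paired plain/friendly phrasings,
# add it to a document's embedding, and pick the nearest candidate rewrite.
# Model name and all texts here are assumptions for illustration.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

plain = ["Submit the report by Friday.", "Your request was denied."]
friendly = ["Could you please send the report by Friday? Thanks!",
            "Unfortunately we can't approve this right now, sorry!"]
direction = model.encode(friendly).mean(axis=0) - model.encode(plain).mean(axis=0)

doc = "The meeting is cancelled. Check the wiki for the new date."
candidates = [
    "The meeting is cancelled. Check the wiki for the new date.",
    "Heads up, today's meeting is cancelled! The new date is on the wiki.",
    "Meeting cancelled. New date: wiki.",
]

shifted = model.encode(doc) + direction          # "add" friendliness
scores = util.cos_sim(shifted, model.encode(candidates))
print(candidates[int(scores.argmax())])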

replies(4): >>43965837, >>43965882, >>43965914, >>43968887
luke-stanley | No.43965837
"starting from an embedding and working backwards to generate a piece of text that is semantically close by?" Apparently this is called embedding inversion and Universal Zero-shot Embedding Inversion https://arxiv.org/abs/2504.00147 Going incrementally closer and closer to the target with some means to vary seems to be the most general way, there are lots of ways to be more optimal though. Image diffusion with CLIP embeddings and such is kinda related too.
replies(1): >>43983210
luke-stanley | No.43983210
I meant to say: Apparently this is called "embedding inversion", and "Universal Zero-shot Embedding Inversion" is a related paper that covers a lot of the basics. Recently I learned that an ArXiv RAG agent by ArXiv Labs is a really cool way to find out about research: https://www.alphaxiv.org/assistant Though I had run into "inversion" before, the AlphaXiv Assistant introduced me to "embedding inversion".