
Embeddings are underrated (2024)

(technicalwriting.dev)
484 points | jxmorris12 | 1 comment | source
jas8425 ◴[] No.43965634[source]
If embeddings are roughly the equivalent of a hash, at least insofar as they transform a large input into some kind of "content-addressed distillation" (ignoring the major difference that a hash is opaque whereas an embedding has intrinsic meaning), has there been any research on "cracking" them? That is, starting from an embedding and working backwards to generate a piece of text that is semantically close to it?

I could imagine an LLM inference pipeline where the next-token ranking includes each candidate's similarity to the target embedding, or perhaps the change in direction toward or away from the desired embedding that adding the token would introduce.
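Something like this is easy to prototype as a re-ranking step. Here's a rough sketch, assuming a sentence-transformers style encoder; the model name and every other name here is just illustrative, not a real pipeline:

    # Rough sketch: re-rank candidate continuations by how much each one
    # moves the running text's embedding toward a target embedding.
    # Assumes sentence-transformers; all names are illustrative.
    import numpy as np
    from sentence_transformers import SentenceTransformer

    encoder = SentenceTransformer("all-MiniLM-L6-v2")

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def rerank(prefix, candidates, target_emb):
        # Score each candidate by the change in similarity to the target
        # that appending it would introduce.
        base = cosine(encoder.encode(prefix), target_emb)
        scored = [(cosine(encoder.encode(prefix + c), target_emb) - base, c)
                  for c in candidates]
        return sorted(scored, reverse=True)

    target_emb = encoder.encode("a warm, friendly apology")
    print(rerank("We regret", [" to inform you", " any trouble, truly"], target_emb))

This doesn't touch the LLM's own logits at all; it just filters proposals, which is the crudest version of the idea.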

Put another way, the author gives the example:

> embedding("king") - embedding("man") + embedding("woman") ≈ embedding("queen")

What if you could do that but for whole bodies of text?

I'm imagining being able to do "semantic algebra" with whole paragraphs/articles/books. Instead of just prompting an LLM to "adjust the tone to be more friendly", you could have the core concept of "friendly" (or some more nuanced variant thereof) and "add" it to your existing text, etc.

replies(4): >>43965837 #>>43965882 #>>43965914 #>>43968887 #
jerjerjer ◴[] No.43965882[source]
> If embeddings are roughly the equivalent of a hash

Embeddings are roughly the equivalent of fuzzy hashes.

replies(1): >>43966297 #
quantadev ◴[] No.43966297[source]
A hash maps a data array to a more compact representation with a single output, where the defining attributes are uniqueness and the improbability of collision. That is the opposite of what embeddings are for, and of what they do.

Embeddings also map a data array to a different (and yes, smaller) data array, but the goal is not to compress everything down to one opaque value; it is to spread the input out across an output vector where each element carries meaning. Embeddings are the exact opposite of hashes.

Hashes destroy meaning; embeddings create meaning. Hashes destroy structure in space; embeddings create structure in space.
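A quick illustration of the contrast (sentence-transformers assumed, but any embedding model would behave similarly): a one-character edit scrambles a cryptographic digest completely, while the two embeddings stay close together in space.

    import hashlib
    import numpy as np
    from sentence_transformers import SentenceTransformer

    a = "The cat sat on the mat."
    b = "A cat sat on the mat."

    # A one-character edit yields a completely unrelated digest...
    print(hashlib.sha256(a.encode()).hexdigest()[:16])
    print(hashlib.sha256(b.encode()).hexdigest()[:16])

    # ...while the embeddings remain nearly identical.
    encoder = SentenceTransformer("all-MiniLM-L6-v2")
    ea, eb = encoder.encode(a), encoder.encode(b)
    print(float(np.dot(ea, eb) / (np.linalg.norm(ea) * np.linalg.norm(eb))))  # ~1.0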

replies(1): >>43966629 #
nighthawk454 ◴[] No.43966629[source]
A hash function, in general, is just a function that maps arbitrary input to a fixed-size output. By that definition, embeddings are hash functions.

You’re probably thinking of cryptographic hashes, where avoiding collisions is important. But that property isn't intrinsic to hashing: take Locality-Sensitive Hashing, for example, where specific kinds of collisions are deliberately encouraged.
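For instance, here's a minimal random-hyperplane (SimHash-style) sketch in plain NumPy, with made-up dimensions, where nearby vectors are supposed to collide:

    # Random-hyperplane LSH: one sign bit per hyperplane, so similar
    # vectors flip few bits and land in the same bucket on purpose.
    import numpy as np

    rng = np.random.default_rng(0)
    dim, n_bits = 384, 16              # illustrative sizes
    planes = rng.standard_normal((n_bits, dim))

    def simhash_bucket(vec):
        # Each bit records which side of a random hyperplane the vector is on.
        bits = (planes @ vec) > 0
        return int("".join("1" if b else "0" for b in bits), 2)

    v = rng.standard_normal(dim)
    noisy = v + 0.05 * rng.standard_normal(dim)  # a near-duplicate
    other = rng.standard_normal(dim)             # an unrelated vector

    print(simhash_bucket(v) == simhash_bucket(noisy))  # usually True
    print(simhash_bucket(v) == simhash_bucket(other))  # almost always False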

replies(2): >>43967339 #>>43967562 #
1. ◴[] No.43967339[source]