
Embeddings are underrated (2024)

(technicalwriting.dev)
484 points by jxmorris12 | 1 comment
tyho
> The 2D map analogy was a nice stepping stone for building intuition but now we need to cast it aside, because embeddings operate in hundreds or thousands of dimensions. It’s impossible for us lowly 3-dimensional creatures to visualize what “distance” looks like in 1000 dimensions. Also, we don’t know what each dimension represents, hence the section heading “Very weird multi-dimensional space”. One dimension might represent something close to color. The king - man + woman ≈ queen anecdote suggests that these models contain a dimension with some notion of gender. And so on. Well Dude, we just don’t know.

Nit: this suggests that the model contains a direction with some notion of gender, not a dimension. Direction and dimension might seem inextricably linked by definition, but with some handwavy maths you find that the number of nearly orthogonal directions in an n-dimensional space grows exponentially with n. This helps explain why spaces on the order of 1k dimensions can "fit" billions of concepts.
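
A rough numerical sketch of that handwavy maths (a hypothetical Python/NumPy snippet; the vector counts and dimensions are just illustrative): random directions in high-dimensional space are already close to orthogonal.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_abs_cosine(dim, n_vectors=200):
    # Sample random directions and normalise them to unit length.
    vecs = rng.standard_normal((n_vectors, dim))
    vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)
    # Pairwise cosine similarities; keep only the off-diagonal entries.
    sims = vecs @ vecs.T
    off_diag = sims[~np.eye(n_vectors, dtype=bool)]
    return np.abs(off_diag).mean()

for dim in (3, 10, 100, 1000):
    print(dim, round(mean_abs_cosine(dim), 3))
# The mean |cosine| shrinks roughly like 1/sqrt(dim), so by ~1000 dimensions
# random directions are nearly perpendicular, leaving room for far more
# distinct "concept directions" than there are axes.
```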

kaycebasques
Oh yes, this makes a lot of sense, thank you for the "nit" (which doesn't feel like a nit to me, it feels like an important conceptual correction). When I was writing the post I definitely paused at that part, knowing that something was off about describing the model as having a dimension that maps to gender. As you said, since the models are general-purpose and work so well in so many domains, there's no way that there's a 1-to-1 correspondence between concepts and dimensions.

I think your comment is also clicking for me now because I previously did not really understand how cosine similarity worked, but I have since watched videos like this one and understand it better now: https://youtu.be/e9U0QAFbfLI
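
For anyone else catching up on cosine similarity, a minimal sketch (toy made-up vectors, not real embeddings, assuming NumPy):

```python
import numpy as np

def cosine_similarity(a, b):
    # cos(theta) = (a . b) / (|a| * |b|): 1 means same direction,
    # 0 means orthogonal (unrelated), -1 means opposite.
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine_similarity([1.0, 0.5, 0.0], [0.9, 0.6, 0.1]))  # close to 1: similar
print(cosine_similarity([1.0, 0.0, 0.0], [0.0, 1.0, 0.0]))  # 0.0: unrelated
```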

I will eventually update the post to correct this inaccuracy. Thank you for improving my own wetware's conceptual model of embeddings.

manmal
This video explains the direction-encodes-trait topic very well IMO: https://youtu.be/wjZofJX0v4M

It’s the first in a series of three that I highly recommend.

> there's no way that there's a 1-to-1 correspondence between concepts and dimensions.

I don’t know about that! Once you go very high dimensional, there are lots of direction vectors that are almost perfectly perpendicular to each other (meaning they can cleanly encode a trait). Maybe they don’t even need to be perfectly perpendicular; the dot product just needs to be very close to zero.
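
A toy illustration of the direction-encodes-trait idea (made-up 4-dimensional vectors standing in for real embeddings, so this is only a sketch of the king - man + woman ≈ queen arithmetic, not how an actual model is probed):

```python
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Made-up 4-dimensional "embeddings"; real models learn these, in far more dimensions.
vocab = {
    "king":  np.array([0.9, 0.8, 0.1, 0.0]),
    "queen": np.array([0.9, 0.1, 0.8, 0.0]),
    "man":   np.array([0.1, 0.9, 0.1, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9, 0.1]),
    "apple": np.array([0.0, 0.1, 0.1, 0.9]),
}

# If a consistent "gender direction" exists in the space, then
# king - man + woman should land closest to queen.
target = vocab["king"] - vocab["man"] + vocab["woman"]
ranked = sorted(vocab, key=lambda w: cosine(target, vocab[w]), reverse=True)
print(ranked)  # with these toy numbers, "queen" ranks highest
```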