
176 points by nxa | 1 comment

I've been playing with embeddings and wanted to see what the embedding layer produces from simple word-by-word input with addition and subtraction, beyond the examples most videos and papers mention (like the obvious king - man + woman = queen). So I built something that doesn't just return the first answer, but ranks candidate matches by distance / cosine similarity. I polished it a bit so that others can try it out, too.

For now, the dataset only contains nouns (and some proper nouns), and I pick the most common interpretation among homographs. It's also case-sensitive.
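The core idea above can be sketched in a few lines: combine word vectors with addition/subtraction, then rank every vocabulary word by cosine similarity to the result instead of returning just the top hit. This is a minimal toy sketch, not the author's implementation; the tiny hand-made 3-dimensional embedding table below is invented for illustration (real systems use learned embeddings with hundreds of dimensions).

```python
import math

# Toy, hand-made embeddings for illustration only (not learned vectors).
EMBEDDINGS = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.1, 0.8],
    "man":   [0.1, 0.9, 0.1],
    "woman": [0.1, 0.1, 0.9],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def add(u, v):
    return [a + b for a, b in zip(u, v)]

def sub(u, v):
    return [a - b for a, b in zip(u, v)]

def rank(expr_vec, exclude=()):
    """Rank all vocabulary words by cosine similarity to expr_vec,
    excluding the input words themselves (standard for analogies)."""
    scored = [(w, cosine(expr_vec, v))
              for w, v in EMBEDDINGS.items() if w not in exclude]
    return sorted(scored, key=lambda t: t[1], reverse=True)

# king - man + woman, ranked rather than just the first answer
vec = add(sub(EMBEDDINGS["king"], EMBEDDINGS["man"]), EMBEDDINGS["woman"])
ranked = rank(vec, exclude={"king", "man", "woman"})
print(ranked)  # queen ranks first in this toy table
```

Excluding the query words matters: without it, one of the inputs (often the first operand) tends to be the nearest neighbor of the combined vector, which is why analogy demos conventionally filter them out.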

lightyrs ◴[] No.43989014[source]
I don't get it but I'm not sure I'm supposed to.

    life + death = mortality
    life - death = lifestyle

    drug + time = occasion
    drug - time = narcotic

    art + artist + money = creativity
    art + artist - money = muse

    happiness + politics = contentment
    happiness + art      = gladness
    happiness + money    = joy
    happiness + love     = joy
1. grey-area ◴[] No.43989096[source]
Does the system you’re querying ‘get it’? From the answers it doesn’t seem to understand these words or their relations. Once in a while it’ll hit on something that seems to make sense.