
176 points | nxa | 1 comment

I've been playing with embeddings and wanted to see what results the embedding layer produces from plain word-by-word input with addition / subtraction, beyond the examples most videos / papers mention (like the obvious king - man + woman = queen). So I built something that doesn't just give the first answer, but ranks the matches by distance / cosine similarity. I polished it a bit so that others can try it out, too.

For now, the dataset only contains nouns (and some proper nouns), and I pick the most common interpretation among homographs. Also, it's case-sensitive.
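For reference, here's a minimal sketch of the kind of arithmetic involved, assuming gensim and a pretrained GloVe model (not necessarily what the site itself uses, and without the noun-only vocabulary filtering described above):

  # Minimal sketch of embedding arithmetic ranked by cosine similarity.
  # Assumes gensim and a downloadable GloVe model; the site itself may use
  # a different embedding source and its own vocabulary filtering.
  import gensim.downloader as api

  vectors = api.load("glove-wiki-gigaword-50")  # ~66 MB download on first use

  def word_math(positive, negative, topn=5):
      # Rank vocabulary words by cosine similarity to
      # sum(positive vectors) - sum(negative vectors).
      return vectors.most_similar(positive=positive, negative=negative, topn=topn)

  # The classic example: king - man + woman ~= queen
  for word, score in word_math(["king", "woman"], ["man"]):
      print(f"{word:<15} {score:.3f}")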

nikolay No.43988786
Really?!

  man - brain = woman
  woman - brain = businesswoman
nxa No.43988964
I probably should have prefaced this with "try at your own risk, results don't reflect the author's opinions"
dmonitor No.43989943
I'm sure it would be trivial to get it to say something incredibly racist, so that's probably a worthwhile disclaimer to put on the website.