
176 points by nxa | 2 comments

I've been playing with embeddings and wanted to see what an embedding layer produces from simple word-by-word input with addition and subtraction, beyond the examples most videos and papers cover (like the obvious king - man + woman = queen). So I built something that doesn't just give the first answer, but ranks the matches by distance / cosine similarity. I polished it a bit so that others can try it out, too.
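The core idea can be sketched in a few lines. This is a minimal illustration, not the author's implementation: the tiny 3-D "embeddings" below are made up for demonstration, whereas a real tool would load pretrained vectors (e.g. word2vec or GloVe).

```python
import numpy as np

# Toy vocabulary with made-up 3-D vectors, chosen so the analogy works out.
embeddings = {
    "king":  np.array([0.8, 0.6, 0.1]),
    "queen": np.array([0.8, 0.1, 0.6]),
    "man":   np.array([0.2, 0.9, 0.1]),
    "woman": np.array([0.2, 0.4, 0.6]),
    "apple": np.array([0.1, 0.1, 0.9]),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank(target, exclude=()):
    """Rank every vocabulary word by cosine similarity to the target
    vector, best match first, skipping the query words themselves."""
    scores = [(w, cosine(target, v)) for w, v in embeddings.items()
              if w not in exclude]
    return sorted(scores, key=lambda t: t[1], reverse=True)

# king - man + woman, ranked instead of returning only the top hit.
target = embeddings["king"] - embeddings["man"] + embeddings["woman"]
ranking = rank(target, exclude={"king", "man", "woman"})
print(ranking)  # "queen" comes out on top with these toy vectors
```

Excluding the query words from the ranking matters in practice: with real pretrained vectors, the nearest neighbour of `king - man + woman` is often `king` itself.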

For now, the dataset only contains nouns (and some proper nouns), and I pick the most common interpretation among homographs. It's also case-sensitive.

cabalamat ◴[] No.43988904[source]
What does it mean when it surrounds a word in red? Is this signalling an error?
replies(3): >>43988929 #>>43988992 #>>43989055 #
1. nxa ◴[] No.43988929[source]
Yes, a word in red means the word wasn't found. That's mostly the case when you try plurals or non-nouns (for now).
replies(1): >>43989067 #
2. rpastuszak ◴[] No.43989067[source]
This is neat!

I think you need to disable auto-capitalisation because on mobile the first word becomes uppercase and triggers a validation error.
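If the input is a plain HTML form field, the mobile auto-capitalisation can be suppressed with standard attributes. A hypothetical sketch (the actual markup of the tool isn't shown in the thread):

```html
<!-- autocapitalize/autocorrect are standard mobile hints; they stop the
     browser keyboard from uppercasing the first letter of the query. -->
<input type="text" autocapitalize="off" autocorrect="off" spellcheck="false">
```

Alternatively, lowercasing the query server-side before lookup would sidestep the issue, at the cost of losing the case-sensitive proper-noun handling mentioned above.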