
Embeddings are underrated (2024)

(technicalwriting.dev)
484 points by jxmorris12 | 1 comment
jacobr1 ◴[] No.43964219[source]
I may have missed it ... but were any direct applications to tech writers discussed in this article? Embeddings are fascinating and very important for things like LLMs or semantic search, but the author seems to imply more direct utility.
replies(4): >>43964349 #>>43964388 #>>43964584 #>>43964664 #
1. PaulHoule ◴[] No.43964349[source]
Semantic search, classification, and clustering. For the first, there is a substantial breakthrough in IR every 10 years or so, so you take what you can get. (I got so depressed reading TREC proceedings, which seemed to prove that "every obvious idea to improve search relevance doesn't work"; it wasn't until I found a summary of the first ten years that I learned they had turned up one genuinely useful result, BM25.)

As for classification, it is highly practical to run a text through an embedding model and then feed the resulting vector to a classical ML algorithm from

https://scikit-learn.org/stable/supervised_learning.html
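Roughly, in scikit-learn terms (the embedding model name and the toy data below are just placeholders, not anything from the article):

    # Sketch: embed texts, then train an ordinary scikit-learn classifier on the vectors.
    # Assumes the sentence-transformers package; any embedding model would do.
    from sentence_transformers import SentenceTransformer
    from sklearn.linear_model import LogisticRegression

    texts = ["refund my order", "how do I reset my password", "cancel my subscription"]
    labels = ["billing", "account", "billing"]

    model = SentenceTransformer("all-MiniLM-L6-v2")
    X = model.encode(texts)                      # shape: (n_texts, embedding_dim)

    clf = LogisticRegression(max_iter=1000).fit(X, labels)
    print(clf.predict(model.encode(["I want my money back"])))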

This works so consistently that I'm considering not including a bag-of-words classifier in the text classification library I'm working on. People who hold court on the Hugging Face forums tend to believe you can do better with a fine-tuned BERT, and I'd agree you can do better with that, but training time is 100x and maybe you won't.

20 years ago you could make bag-of-words vectors and put them through a clustering algorithm

https://scikit-learn.org/stable/modules/clustering.html

and it ran, but you got awful results. With embeddings you can use a very simple and fast algorithm like

https://scikit-learn.org/stable/modules/clustering.html#k-me...

and get great clusters.
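Something like this, say (again the model name and documents are placeholders):

    # Sketch: cluster embeddings with plain k-means.
    from sentence_transformers import SentenceTransformer
    from sklearn.cluster import KMeans

    docs = ["install the driver", "driver won't install",
            "pricing for teams", "how much does it cost"]

    model = SentenceTransformer("all-MiniLM-L6-v2")
    X = model.encode(docs, normalize_embeddings=True)  # unit-length vectors suit k-means

    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    print(list(zip(km.labels_, docs)))                 # related docs land in the same cluster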

I'd disagree with the bit about it taking "a lot of linear algebra" to find nearby vectors; it can be done with a dot product, so I'd say it is "a little linear algebra".
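To make that concrete (random vectors standing in for real embeddings):

    # Sketch: nearest-vector search over normalized embeddings is one matrix-vector product.
    import numpy as np

    rng = np.random.default_rng(0)
    corpus = rng.normal(size=(1000, 384))                    # pretend embeddings
    corpus /= np.linalg.norm(corpus, axis=1, keepdims=True)  # normalize: dot product = cosine

    query = corpus[42]
    scores = corpus @ query                                  # one dot product per document
    print(np.argsort(scores)[::-1][:5])                      # top 5 (the query itself comes first)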