As for classification, it is highly practical to put a text through an embedding and then run the embedding through a classical ML algorithm out of
https://scikit-learn.org/stable/supervised_learning.html
This works so consistently that I'm considering not including a bag-of-words classifier at all in a text classification library I'm working on. People who hold court on Huggingface forums tend to believe you can do better with a fine-tuned BERT, and I'd agree you can do better with that, but training time is ~100x and maybe you won't.
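A minimal sketch of the embed-then-classify pattern. The embed() function here is a stand-in (hashed character trigrams) just so the example runs end to end; in practice you'd swap in a real embedding model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def embed(texts, dim=64):
    # Stand-in for a real embedding model: hash character trigrams
    # into a fixed-size vector. Replace with your actual embedder.
    out = np.zeros((len(texts), dim))
    for i, t in enumerate(texts):
        for j in range(len(t) - 2):
            out[i, hash(t[j:j + 3]) % dim] += 1.0
    # L2-normalize, as most embedding models do
    norms = np.linalg.norm(out, axis=1, keepdims=True)
    return out / np.maximum(norms, 1e-9)

texts = ["the cat sat", "a dog barked", "cats purr loudly", "dogs chase balls"]
labels = [0, 1, 0, 1]  # 0 = cat-related, 1 = dog-related

# Any classical classifier from scikit-learn works on the vectors
clf = LogisticRegression().fit(embed(texts), labels)
pred = clf.predict(embed(["my cat sleeps"]))
```

The point is that the classifier side is entirely off-the-shelf; all the text understanding lives in the embedding.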
20 years ago you could make bag-of-words vectors and put them through a clustering algorithm
https://scikit-learn.org/stable/modules/clustering.html
and it ran, but the results were awful. With embeddings you can use a very simple and fast algorithm like
https://scikit-learn.org/stable/modules/clustering.html#k-me...
and get great clusters.
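A sketch of the clustering side, with toy 2-D points standing in for real text embeddings:

```python
import numpy as np
from sklearn.cluster import KMeans

# Two obviously separated groups of "embeddings"
vecs = np.array([[0.0, 0.1], [0.1, 0.0], [0.05, 0.05],
                 [5.0, 5.1], [5.1, 5.0], [4.95, 5.05]])

# Plain k-means recovers the groups; with good embeddings,
# real documents separate just as cleanly in high dimensions
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(vecs)
cluster_labels = km.labels_
```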
I'd disagree with the bit about it taking "a lot of linear algebra" to find nearby vectors; it can be done with a dot product, so I'd say it's "a little linear algebra".
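To make "a little linear algebra" concrete: if the stored embeddings are unit-normalized, the nearest vector by cosine similarity is a single matrix-vector product plus an argmax.

```python
import numpy as np

# Three stored unit vectors (rows) standing in for document embeddings
db = np.array([[1.0, 0.0],
               [0.0, 1.0],
               [0.707, 0.707]])

query = np.array([0.9, 0.1])
query = query / np.linalg.norm(query)  # normalize so dot = cosine

sims = db @ query            # one dot product per stored vector
nearest = int(np.argmax(sims))  # index of the most similar vector
```

Here the query points mostly along the first axis, so the first stored vector wins.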