
Embeddings are underrated (2024)

(technicalwriting.dev)
484 points | jxmorris12
kaycebasques No.43964290
Hello, I wrote this. Thank you for reading!

The post was previously discussed 6 months ago: https://news.ycombinator.com/item?id=42013762

To be clear, when I said "embeddings are underrated" I was only arguing that my fellow technical writers (TWs) were not paying enough attention to a very useful new tool in the TW toolbox. I know that the statement sounds silly to ML practitioners, who very much don't "underrate" embeddings.

I know that the post is light on details regarding how exactly we apply embeddings in TW. I have some projects and other blog posts in the pipeline. Short story long, embeddings are important because they can help us make progress on the 3 intractable challenges of TW: https://technicalwriting.dev/strategy/challenges.html

replies(6): >>43964625 >>43965226 >>43965364 >>43965743 >>43966818 >>43967786
rybosome No.43964625
Thanks for the write-up!

I’m curious how you found the quality of the results. This gets into evals, which ML folks love, but even just going by “vibes”, do the results eyeball as reasonable to you?

replies(1): >>43966265
kaycebasques No.43966265
By "results" I assume you're asking about the related-pages experiment? The results were definitely promising. A lot of the calculated related pages were totally reasonable. E.g., if I'm reading https://www.sphinx-doc.org/en/master/development/html_themes... then it's very reasonable to assume that I may also be interested in https://www.sphinx-doc.org/en/master/usage/theming.html
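
For readers curious what a related-pages calculation can look like in practice, here is a minimal sketch: embed the text of each page, then rank the other pages by cosine similarity. The model name, page texts, and `related` helper are illustrative assumptions, not necessarily the setup used in the experiment above.

    # Minimal related-pages sketch: embed each page, rank others by cosine similarity.
    # The model and page texts are placeholders for illustration only.
    import numpy as np
    from sentence_transformers import SentenceTransformer

    pages = {
        "development/html_themes": "How to build your own Sphinx HTML theme...",
        "usage/theming": "How to apply and configure an existing HTML theme...",
        "usage/quickstart": "Getting started with Sphinx...",
    }

    model = SentenceTransformer("all-MiniLM-L6-v2")  # any embedding model works here
    ids = list(pages)
    # Normalized vectors make the dot product equal to cosine similarity.
    vectors = model.encode([pages[i] for i in ids], normalize_embeddings=True)

    def related(page_id: str, top_k: int = 2) -> list[tuple[str, float]]:
        """Return the top_k pages most similar to page_id, excluding itself."""
        query = vectors[ids.index(page_id)]
        scores = vectors @ query
        ranked = sorted(zip(ids, scores), key=lambda p: p[1], reverse=True)
        return [(i, float(s)) for i, s in ranked if i != page_id][:top_k]

    print(related("development/html_themes"))
    # Expected to rank "usage/theming" near the top, mirroring the example above.

The same pattern scales to a full doc set: precompute the embeddings once at build time, then the per-page ranking is just a matrix-vector product.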