
Embeddings are underrated (2024)

(technicalwriting.dev)
484 points by jxmorris12 | 2 comments
jasonjmcghee No.43964913
Another very cool attribute of embeddings and embedding search is that they are cheap enough, resource-wise, that you can run them client side.

ONNX models can be loaded and executed in the browser with transformers.js: https://github.com/huggingface/transformers.js/
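
For reference, a minimal sketch of what client-side embedding with transformers.js looks like (the model name is just an example; any ONNX feature-extraction model should work):

    import { pipeline } from "@huggingface/transformers";

    // Load a small ONNX embedding model; it's downloaded once and cached by the browser.
    const embed = await pipeline("feature-extraction", "Xenova/all-MiniLM-L6-v2");

    // Mean-pool and L2-normalize so cosine similarity reduces to a dot product.
    const output = await embed("I'm scared", { pooling: "mean", normalize: true });
    const vector = Array.from(output.data as Float32Array); // e.g. a 384-dim embedding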

You can even build and statically host embedding indices like HNSW.

I put together a little open source demo of this here: https://jasonjmcghee.github.io/portable-hnsw/ (it's a prototype / hacked-together approximation of HNSW, but you could implement the real thing).

Long story short: represent the index as queryable Parquet files and use DuckDB to query them.
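
As a rough sketch of that idea (the URL and column names here are invented for illustration), querying a statically hosted Parquet file from the browser with DuckDB-Wasm looks something like:

    import * as duckdb from "@duckdb/duckdb-wasm";

    // Spin up DuckDB-Wasm in a web worker, with the bundle served from jsDelivr.
    const bundle = await duckdb.selectBundle(duckdb.getJsDelivrBundles());
    const workerUrl = URL.createObjectURL(
      new Blob([`importScripts("${bundle.mainWorker!}");`], { type: "text/javascript" })
    );
    const db = new duckdb.AsyncDuckDB(new duckdb.ConsoleLogger(), new Worker(workerUrl));
    await db.instantiate(bundle.mainModule, bundle.pthreadWorker);
    const conn = await db.connect();

    // Read a Parquet file straight off static hosting; DuckDB uses HTTP range
    // requests, so only the row groups the query touches get downloaded.
    const rows = await conn.query(`
      SELECT id, text
      FROM read_parquet('https://example.github.io/index/nodes.parquet')
      LIMIT 10
    `);
    console.log(rows.toArray());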

Depending on how you host, it's either free or nearly free. I used GitHub Pages, so it's free. Cloudflare R2 would only cost you for the size of what you store (very cheap, and no egress fees).

replies(3): >>43965038, >>43966350, >>43966793
1. rrr_oh_man No.43966793
Can you elaborate on what is happening? The results don't really make sense to me.
replies(1): >>43968761
2. jasonjmcghee No.43968761
You should be able to search for vibes and get text with similar vibes.

Try something like "I'm scared" or "running fast" - whatever you want - and the results will be passages that are semantically similar.

---

It's an embedding search using hierarchical navigable small worlds (HNSW).

I chunked up a few text corpora (books, blogs, etc.), embedded the chunks, and indexed them.

But the indexing strategy split things into a nodes Parquet file and an edges Parquet file, so they could be served from a CDN and searched client side. (No running vector database required.)
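
For intuition, here is a minimal greedy search over that node/edge representation (a sketch only; the real demo is its own approximation of HNSW, and its schema and traversal differ):

    type NodeId = number;

    function cosine(a: Float32Array, b: Float32Array): number {
      let dot = 0, na = 0, nb = 0;
      for (let i = 0; i < a.length; i++) {
        dot += a[i] * b[i];
        na += a[i] * a[i];
        nb += b[i] * b[i];
      }
      return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    // nodes: id -> embedding (as loaded from the nodes Parquet file)
    // edges: id -> neighbor ids (as loaded from the edges Parquet file)
    function greedySearch(
      nodes: Map<NodeId, Float32Array>,
      edges: Map<NodeId, NodeId[]>,
      query: Float32Array,
      entry: NodeId
    ): NodeId {
      let current = entry;
      let best = cosine(nodes.get(current)!, query);
      while (true) {
        let improved = false;
        for (const neighbor of edges.get(current) ?? []) {
          const sim = cosine(nodes.get(neighbor)!, query);
          if (sim > best) {
            best = sim;
            current = neighbor;
            improved = true;
          }
        }
        // Stop at a local optimum: no neighbor is closer to the query.
        if (!improved) return current;
      }
    }

Because the nodes and edges live in ordinary Parquet files, each nodes/edges lookup here could just as well be backed by a DuckDB query against the statically hosted files rather than an in-memory map.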