
276 points | Fendy | 6 comments
cpursley No.45171865
Postgres has pgvector. Postgres is where all of my data already lives. It’s all open source and runs anywhere. What am I missing with the specialty vector stores?
replies(1): >>45171919 #
CuriouslyC No.45171919
Latency, actual retrieval performance, integrated pipelines that do more than just vector search to produce better results; the list goes on.

Postgres for vector search is fine for toy products or for stuff that's outside the hot loop of your business, but for high-performance applications it's just inadequate.

replies(1): >>45171952 #
cpursley No.45171952
For the vast majority of applications, the trade-off of keeping everything in Postgres is worth it versus the operational overhead of some VC-hyped data store that won't be around in 5 years. Most people learned this lesson with Mongo (Postgres jsonb is now good enough for 90% of scenarios; see the sketch below).
replies(3): >>45171998 #>>45172223 #>>45172941 #
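A minimal sketch of the jsonb-as-document-store pattern mentioned above, assuming Postgres and the psycopg 3 driver; the docs table and its fields are hypothetical:

```python
# Minimal sketch of the "Postgres jsonb instead of Mongo" pattern,
# assuming a reachable Postgres instance and psycopg 3; the docs
# table and its fields are illustrative, not from the thread.
import psycopg
from psycopg.types.json import Jsonb

with psycopg.connect("dbname=app") as conn:
    conn.execute("""
        CREATE TABLE IF NOT EXISTS docs (
            id   bigserial PRIMARY KEY,
            body jsonb NOT NULL
        )
    """)
    # A GIN index makes containment queries (@>) fast, Mongo-style.
    conn.execute(
        "CREATE INDEX IF NOT EXISTS docs_body_gin ON docs USING gin (body)"
    )
    conn.execute(
        "INSERT INTO docs (body) VALUES (%s)",
        [Jsonb({"user": "alice", "tags": ["postgres", "jsonb"]})],
    )
    # Find every document whose body contains this sub-object.
    rows = conn.execute(
        "SELECT body FROM docs WHERE body @> %s",
        [Jsonb({"user": "alice"})],
    ).fetchall()
```

The GIN index is what makes containment queries competitive with a document store for most read patterns.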
1. CuriouslyC No.45172223
I'm a legit Postgres fanboy (my comment history will back this up), but the ops overhead and performance implications of trying to run pgvector as your core vector store for everything are just silly: you'll end up doing all sorts of Postgres replication gymnastics to make up for the fact that you're using the wrong tool for the job. It's good for prototyping and small/non-core workloads; use it outside that scope at your own peril.
replies(3): >>45172576 #>>45172826 #>>45173775 #
2. cpursley No.45172576
Guess I'm just not webscale™
3. alastairr No.45172826
Interested to hear more on this. I've been using Pinecone for ages, but they recently increased the cost floor for serverless, so I've been thinking of moving everything to pgvector (~1M vectors, so not loads), as all the bigger metadata lives there anyway. I'd be interested in any views on that.
replies(2): >>45172953 #>>45173691 #
4. whakim No.45172953
At 1M embeddings I'd think pgvector would do just fine assuming a sufficiently powerful database.
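For concreteness, a hedged sketch of what that could look like at the ~1M scale, assuming pgvector 0.5+ (for HNSW), psycopg 3, and the pgvector Python adapter; the items table and the 1024 dimension are illustrative:

```python
# Hedged sketch of pgvector at ~1M embeddings, assuming pgvector >= 0.5
# (HNSW support), psycopg 3, and the pgvector Python package; the items
# table and 1024-dim embeddings are illustrative.
import numpy as np
import psycopg
from pgvector.psycopg import register_vector

with psycopg.connect("dbname=app") as conn:
    conn.execute("CREATE EXTENSION IF NOT EXISTS vector")
    register_vector(conn)  # adapt numpy arrays <-> the vector type
    conn.execute("""
        CREATE TABLE IF NOT EXISTS items (
            id        bigserial PRIMARY KEY,
            embedding vector(1024)  -- match your embedding model's dimension
        )
    """)
    # HNSW gives approximate nearest-neighbor search; at ~1M rows this,
    # plus enough RAM to keep the index hot, is what keeps latency sane.
    conn.execute("""
        CREATE INDEX IF NOT EXISTS items_embedding_idx
        ON items USING hnsw (embedding vector_cosine_ops)
    """)
    query = np.random.rand(1024).astype(np.float32)  # stand-in embedding
    nearest = conn.execute(
        "SELECT id FROM items ORDER BY embedding <=> %s LIMIT 10",
        (query,),
    ).fetchall()
```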
5. CuriouslyC No.45173691
It depends on your flow, honestly. If you're just using your vectors for WHERE filters on domain objects and you don't have hundreds of millions of vectors, pgvector is fine. If you have any sort of workflow where you need low-latency access to vectors and reliable random-read performance, or where vector work is the bottleneck on performance, pgvector goes tits up.
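On the latency point, the main per-query knob pgvector offers is hnsw.ef_search, which trades recall for speed; a small sketch reusing the hypothetical items table from the sketch above:

```python
# The per-session latency/recall knob pgvector exposes for HNSW queries.
# Assumes the hypothetical items table from the earlier sketch already
# exists with the vector extension installed.
import numpy as np
import psycopg
from pgvector.psycopg import register_vector

with psycopg.connect("dbname=app") as conn:
    register_vector(conn)
    # Lower ef_search = faster but less accurate; pgvector's default is 40.
    conn.execute("SET hnsw.ef_search = 20")
    query = np.random.rand(1024).astype(np.float32)
    rows = conn.execute(
        "SELECT id FROM items ORDER BY embedding <=> %s LIMIT 10",
        (query,),
    ).fetchall()
```

Everything past that knob (connection overhead, cold buffer cache, replica lag) is plain Postgres behavior, which is the part being argued a dedicated store sidesteps.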
6. j45 No.45173775
Appreciate the clarification. I've been using it for small/medium things and it's been OK.

The "everything in Postgres for as long as reasonably possible" approach is fun, but not something I expect to last forever.