I had similar thoughts about using it for the Quran. I think this highlights that you have to be very specific about your use case, especially when you expect an exact response on static text that shouldn't change. That's why I'm trying something a bit different: I've generated embeddings for the Quran and use chromem-go to store them. I run the question through a similarity search against that index first, then feed the results in as context to an LLM. In the response I still cite the references so I can see what they were. It's not perfect, but it's a first step towards something. I think they call this RAG.
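The retrieval step is roughly this, sketched here in plain Go with a toy bag-of-words "embedding" and cosine similarity instead of a real embedding model and chromem-go (the verse texts and refs below are placeholders, not actual verses):

```go
package main

import (
	"fmt"
	"math"
	"sort"
	"strings"
)

// Verse pairs a reference with its text so retrieved
// chunks can be cited back in the final answer.
type Verse struct {
	Ref  string
	Text string
}

// embed is a toy stand-in for a real embedding model:
// a bag-of-words count vector over a fixed vocabulary.
func embed(text string, vocab []string) []float64 {
	vec := make([]float64, len(vocab))
	for i, v := range vocab {
		for _, w := range strings.Fields(strings.ToLower(text)) {
			if strings.Trim(w, ".,?!") == v {
				vec[i]++
			}
		}
	}
	return vec
}

// cosine measures similarity between two vectors.
func cosine(a, b []float64) float64 {
	var dot, na, nb float64
	for i := range a {
		dot += a[i] * b[i]
		na += a[i] * a[i]
		nb += b[i] * b[i]
	}
	if na == 0 || nb == 0 {
		return 0
	}
	return dot / (math.Sqrt(na) * math.Sqrt(nb))
}

func main() {
	vocab := []string{"mercy", "patience", "charity", "prayer"}
	verses := []Verse{ // placeholder texts for illustration only
		{"1:1", "a verse about mercy and prayer"},
		{"2:45", "a verse about patience and prayer"},
		{"2:110", "a verse about charity"},
	}
	query := "what is said about patience?"
	q := embed(query, vocab)

	// Rank verses by similarity to the query.
	sort.Slice(verses, func(i, j int) bool {
		return cosine(embed(verses[i].Text, vocab), q) >
			cosine(embed(verses[j].Text, vocab), q)
	})

	// The top hits become the LLM context, with refs
	// kept alongside so the answer can cite them.
	var ctx strings.Builder
	for _, v := range verses[:2] {
		fmt.Fprintf(&ctx, "[%s] %s\n", v.Ref, v.Text)
	}
	fmt.Printf("Context passed to the LLM:\n%s", ctx.String())
}
```

In the real pipeline the vector store and nearest-neighbour query are what chromem-go provides; the point here is just the shape of the flow: embed the question, rank stored chunks, pass the top few (with their refs) into the prompt.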
What I'm working on https://reminder.dev