
223 points benkaiser | 2 comments
asim ◴[] No.42538054[source]
I had similar thoughts about using it for the Quran. I think this highlights that you have to be very specific in your use cases, especially when expecting an exact response on static text that shouldn't change. This is why I'm trying something a bit different. I've generated embeddings for the Quran and use chromem-go to store and query them. I ask the index the question first via a similarity search, then feed the results in as context to an LLM. In the response I still cite the references so I can see what they were. It's not perfect, but it's a first step towards something. I think they call this RAG.
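A minimal sketch of that retrieval step in Go, assuming chromem-go's default (OpenAI-backed) embedding function; the collection name, sample verses, and query are illustrative placeholders, not the actual reminder.dev code:

```go
package main

import (
	"context"
	"fmt"
	"runtime"

	"github.com/philippgille/chromem-go"
)

func main() {
	ctx := context.Background()

	// In-memory vector store; passing nil uses chromem-go's default
	// embedding function (OpenAI), which requires OPENAI_API_KEY.
	db := chromem.NewDB()
	col, err := db.CreateCollection("quran", nil, nil)
	if err != nil {
		panic(err)
	}

	// Hypothetical sample data: in practice every verse would be added,
	// with its surah:ayah reference as the document ID so results can be cited.
	docs := []chromem.Document{
		{ID: "2:170", Content: "When it is said to them, follow what Allah has revealed, they say: we follow what we found our fathers doing. (paraphrased)"},
		{ID: "21:53", Content: "They said: we found our fathers worshipping them. (paraphrased)"},
	}
	if err := col.AddDocuments(ctx, docs, runtime.NumCPU()); err != nil {
		panic(err)
	}

	// Similarity search first; the hits (with their references) then become
	// the context block fed to the LLM alongside the user's question.
	results, err := col.Query(ctx, "why do the pagans say they believe?", 2, nil, nil)
	if err != nil {
		panic(err)
	}
	for _, r := range results {
		fmt.Printf("[%s] %s (similarity %.2f)\n", r.ID, r.Content, r.Similarity)
	}
}
```

Because the verse references travel with the retrieved chunks, the final answer can quote them back, which is what makes the output checkable against the source text.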

What I'm working on https://reminder.dev

replies(6): >>42538137 #>>42538188 #>>42538217 #>>42543891 #>>42545257 #>>42548243 #
1. kamikazeturtles ◴[] No.42538137[source]
I found LLMs to be really good for Quran studies, especially for questions where Google is unreliable.

In one instance I was trying to remember whether it was in the Bible or the Quran where, in the story of Abraham, the pagans are asked why they believe what they believe, they respond with "because our fathers believed", and the scripture critiques this. ChatGPT gave me the exact verses from the Quran, while Google would bring up random unrelated forum posts.

It's also good for comparing religious texts and seeing where stories differ.

replies(1): >>42542847 #
2. int_19h ◴[] No.42542847[source]
GPT-4 specifically seems to have a very good knowledge of the Quran, such that you can ask it for a specific surah and ayah and it'll quote it exactly in Arabic.