Current text-splitting approaches rely on heuristics (although one can use a neural embedder to group semantically related sentences).
I propose a fully neural approach to semantic chunking.
I took the base DistilBERT model and fine-tuned it on BookCorpus to split concatenated text paragraphs back into the original paragraphs. Basically it's a token classification task. Fine-tuning took a day and a half on 2x1080ti.
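Not the actual training code, but a rough sketch of how labels for this kind of token-classification objective could be built with Hugging Face transformers (the make_example helper and the exact boundary convention are my own assumptions):

    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

    def make_example(paragraphs):
        """Concatenate paragraphs and label the tokens that close each one."""
        text = " ".join(paragraphs)
        enc = tokenizer(text, return_offsets_mapping=True, truncation=True)

        # Character positions where each original paragraph ends in the joined text.
        boundaries, pos = set(), 0
        for p in paragraphs:
            pos += len(p)
            boundaries.add(pos)
            pos += 1  # the joining space

        labels = []
        for start, end in enc["offset_mapping"]:
            if start == end:         # special tokens ([CLS], [SEP])
                labels.append(-100)  # ignored by the loss
            else:
                labels.append(1 if end in boundaries else 0)
        enc["labels"] = labels
        return enc  # feed batches of these to DistilBertForTokenClassification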
The library could be used as a text-splitter module in a RAG system or for splitting transcripts, for example.
The usage pattern that I see is the following: strip all markup tags to produce plain text and feed that text into the model.
The problem is that, although in theory this should improve overall RAG pipeline performance, I haven't managed to measure it properly. Other limitations: the model only supports English for now, and the output text is lowercased.
Please give it a try. I'd appreciate any feedback.
The Python library: https://github.com/mirth/chonky
The transformer model: https://huggingface.co/mirth/chonky_distilbert_base_uncased_...
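A minimal sketch of driving the model through the plain transformers token-classification pipeline (the model id is truncated above, so substitute the full id from the Hugging Face page; the chunk-reconstruction loop here is just an illustration, not the library's API):

    from transformers import pipeline

    splitter = pipeline(
        "token-classification",
        model="mirth/chonky_distilbert_base_uncased_...",  # full id on the HF page
        aggregation_strategy="simple",
    )

    text = open("article.txt").read()  # plain text, markup already stripped
    # Note: texts longer than the model's 512-token limit need to be windowed.
    breaks = splitter(text)            # tokens predicted as paragraph boundaries

    # Cut the text at each predicted boundary to recover paragraph-like chunks.
    chunks, prev = [], 0
    for b in breaks:
        chunks.append(text[prev:b["end"]].strip())
        prev = b["end"]
    chunks.append(text[prev:].strip())
    print(chunks)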
It uses a similar approach, but it focuses on sentence/paragraph segmentation in general rather than specifically on RAG. It also has some benchmarks. Might be a good source of inspiration for where to take chonky next.
It shouldn't break sentences at commas, right?
As for RAG, I haven't noticed LLMs struggling with poorly structured text (e.g. a YouTube wall-of-text transcript can just be fed directly into an LLM), though I haven't measured this.
In fact, my own "webgrep" (convert the top 10 search results into text and run grep on them, optionally followed by an LLM summary) works at the byte level (I gave up on chunking words, sentences, and paragraphs entirely): I just shove the 1 KB before and after the match into the context. This works fine because LLMs simply ignore the "mutilated" word fragments at the beginning and end.
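Something like this, with the function name and the 1 KB radius just illustrating the idea:

    def byte_windows(data: bytes, needle: bytes, radius: int = 1024):
        """Yield `radius` bytes before and after every match of `needle`."""
        start = 0
        while (hit := data.find(needle, start)) != -1:
            lo = max(0, hit - radius)
            hi = min(len(data), hit + len(needle) + radius)
            yield data[lo:hi]  # may begin/end mid-word; the LLM copes
            start = hit + len(needle)

    page = open("result.txt", "rb").read()
    for window in byte_windows(page, b"semantic chunking"):
        print(window.decode("utf-8", errors="replace"))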
The only downside of this approach is that if I was the LLM, I would probably be unhappy with my job!
As for semantic chunking (in the sense of maximizing the relevance of what goes into the LLM, or indeed as semantic search for the user), I haven't solved it yet, but I can share one amusing experiment: to find the relevant part of the text (having already retrieved a mostly-relevant big chunk), chop off one sentence at a time and re-run the similarity check! So you "distil" the text down to whatever is most relevant (according to the embedding model) to the user query.
This is very slow and stupid, especially in real-time (though kinda fun to watch), but kinda works for the "approximately one sentence answers my question" scenario. A much cheaper approximation here would just be to embed at the sentence level as well as the page/paragraph level.
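For the curious, a sketch of that sentence-dropping experiment (one possible reading of it; the sentence-transformers model is just a placeholder, and this is the slow version):

    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")  # any embedder works

    def distil(sentences, query, target_len=1):
        """Greedily drop the sentence whose removal hurts query similarity least."""
        q = model.encode(query, convert_to_tensor=True)
        sents = list(sentences)
        while len(sents) > target_len:
            best_score, best_drop = float("-inf"), None
            for i in range(len(sents)):
                candidate = " ".join(sents[:i] + sents[i + 1:])
                emb = model.encode(candidate, convert_to_tensor=True)
                score = util.cos_sim(q, emb).item()
                if score > best_score:
                    best_score, best_drop = score, i
            sents.pop(best_drop)  # re-embeds the chunk every step: slow, as noted
        return " ".join(sents)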
We still want chunking in practice, to avoid LLM confusion and undifferentiated embeddings, and to handle large datasets at lower cost and volume. Large context windows mean we can now tolerate multi-paragraph or multi-page chunks, so it's more like chunking by coherent section.
In theory we could do an entire chapter or book, but then those other concerns kick in, so I only see the more niche tools or talk-to-your-PDF apps doing that.
At the same time, embedding is often a significant cost in the above scenarios, so I'm curious about the overhead of semantic chunking.
With the naive chunking approach, we would grab random sections of line items from these tables because they happen to contain text similar to the search query, but there's no guarantee the data pulled into context is complete.
The training objective is clever.
The 50+ filters at Ecodash.ai for 90,000 plants came from a custom RAG model on top of 800,000 raw web pages. Because LLMs are expensive, chunking and semantic search, i.e. figuring out what to feed into the LLM for inference, are a key part of the pipeline that nobody talks about. I think what I did was: run all the text through the cheapest OpenAI embeddings API. Then, I recall that nearest-neighbor vector search wasn't enough to catch all the information relevant to a given query. So I remember generating a large number of diverse queries that mean the same thing (e.g. "plant prefers full sun", "plant thrives in direct sunlight", "... requires at least 6 hours of light per day", ...), doing nearest-neighbor vector search on all of them, and using the resulting statistics to choose what to feed into the RAG context.
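In code, that paraphrase-voting idea looks roughly like this (a sketch: the embedding model name and the brute-force cosine search stand in for whatever was actually used):

    from collections import Counter
    import numpy as np
    from openai import OpenAI

    client = OpenAI()
    EMBED_MODEL = "text-embedding-3-small"  # stand-in for "the cheapest" model

    def embed(texts):
        resp = client.embeddings.create(model=EMBED_MODEL, input=texts)
        return np.array([d.embedding for d in resp.data])

    chunks = ["...page text chunk...", "...another chunk..."]  # your corpus
    chunk_vecs = embed(chunks)

    paraphrases = [
        "plant prefers full sun",
        "plant thrives in direct sunlight",
        "requires at least 6 hours of light per day",
    ]

    votes = Counter()
    for q in embed(paraphrases):
        sims = chunk_vecs @ q / (np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(q))
        for idx in sims.argsort()[-20:]:  # top-k neighbours for this paraphrase
            votes[idx] += 1

    # Chunks retrieved by many paraphrases go into the LLM context.
    context = [chunks[i] for i, _ in votes.most_common(10)]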
The idea was that paragraphs are naturally how we segment distinct thoughts in text, and that this would translate well to segmenting long video clips. It actually worked pretty well! It was able to predict the paragraph breaks in many texts that it wasn't trained on at all.
The problems at the time were around context length and dialog style formatting.
I wanted to try approaching the problem in a less brute-force way, maybe by using sentence embeddings and calculating the probability of a sentence being a "paragraph-ending" sentence, which would likely result in a much smaller model.
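That lighter-weight version could be as simple as a sentence embedder plus a small classifier (a sketch under those assumptions, with toy training data):

    from sentence_transformers import SentenceTransformer
    from sklearn.linear_model import LogisticRegression

    encoder = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence embedder

    # Training data: sentences with a 0/1 label marking paragraph-final sentences.
    train_sents = [
        "The experiment began in March.",
        "By June the results were clear.",   # ends a paragraph
        "A new question then emerged.",
    ]
    train_labels = [0, 1, 0]

    clf = LogisticRegression(max_iter=1000).fit(encoder.encode(train_sents), train_labels)

    # At inference time, insert a break after every sentence above a threshold.
    test_sents = ["Some sentence.", "Another one that wraps up the thought."]
    probs = clf.predict_proba(encoder.encode(test_sents))[:, 1]
    breaks = [i for i, p in enumerate(probs) if p > 0.5]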
Anyway this is really cool! I’m excited to dive in further to what you’ve done!
It splits input text into equal-sized chunks, using DFS and parallelization (rayon) to do so relatively quickly.
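For anyone unfamiliar with the approach, here is a rough Python illustration of the depth-first descent through coarser-to-finer separators (not the actual Rust implementation, and without the parallelism):

    SEPARATORS = ["\n\n", "\n", ". ", " "]  # paragraph -> line -> sentence -> word

    def split(text, limit, depth=0):
        """Pack pieces at this separator level, recursing when one is too big."""
        if len(text) <= limit or depth >= len(SEPARATORS):
            return [text]
        chunks, cur = [], ""
        for piece in text.split(SEPARATORS[depth]):
            candidate = cur + SEPARATORS[depth] + piece if cur else piece
            if len(candidate) <= limit:
                cur = candidate
            else:
                if cur:
                    chunks.append(cur)
                if len(piece) > limit:
                    chunks.extend(split(piece, limit, depth + 1))  # DFS step
                    cur = ""
                else:
                    cur = piece
        if cur:
            chunks.append(cur)
        return chunks

    print(split(open("article.txt").read(), limit=1000))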
However, my goal is to use an LLM to split text by topic. I'm thinking I'll implement it as a SaaS API on top of the OSS library. Do you think that's a viable business? You send in a library of text and receive a library of single-topic context chunks as output.