169 points | hessdalenlight

TL;DR: I’ve made a transformer model and a wrapper library that segment text into meaningful semantic chunks.

Current text-splitting approaches rely on heuristics (although one can use a neural embedder to group semantically related sentences).

I propose a fully neural approach to semantic chunking.

I took the base DistilBERT model and trained it on BookCorpus to split concatenated text paragraphs back into the original paragraphs. Basically, it’s a token classification task. Fine-tuning took a day and a half on 2x 1080 Ti GPUs.
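Roughly, the framing looks like this (an illustrative sketch, not my exact training code; the two-label scheme is just one way to pose the boundary-detection task):

    # Sketch: paragraph-break detection as token classification.
    # Assumed label scheme: 1 = token ends a paragraph, 0 = otherwise.
    from transformers import AutoTokenizer, AutoModelForTokenClassification

    tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
    model = AutoModelForTokenClassification.from_pretrained(
        "distilbert-base-uncased", num_labels=2
    )

    paragraphs = [
        "The first paragraph talks about one topic.",
        "The second paragraph switches to another.",
    ]
    text = " ".join(paragraphs)  # concatenation discards the boundary

    enc = tokenizer(text, return_tensors="pt", truncation=True)
    logits = model(**enc).logits      # shape: (1, seq_len, 2)
    is_break = logits.argmax(dim=-1)  # 1 where a boundary is predicted

During training, the gold labels come from the known paragraph boundaries before concatenation.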

The library could be used as a text-splitter module in a RAG system, or for splitting transcripts, for example.

The usage pattern I have in mind is the following: strip all markup tags to produce plain text, then feed that text into the model (see the sketch below).
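For example (a minimal sketch; the ParagraphSplitter name and its arguments are illustrative, so check the README for the exact interface):

    # Sketch of the intended flow; the chonky API shown here is
    # illustrative -- see the repo for the real interface.
    import re
    from chonky import ParagraphSplitter

    def strip_markup(html: str) -> str:
        # Crude tag stripper for illustration; a real pipeline would
        # use a proper HTML parser such as BeautifulSoup.
        return re.sub(r"<[^>]+>", " ", html)

    splitter = ParagraphSplitter(device="cpu")

    raw = "<h1>Notes</h1><p>Some document text that should be chunked.</p>"
    for chunk in splitter(strip_markup(raw)):
        print(chunk)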

The problem is that, although in theory this should improve overall RAG pipeline performance, I didn’t manage to measure it properly. Other limitations: the model only supports English for now, and the output text is lowercased.
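One way to put a number on it would be a small QA set: retrieve the top-k chunks per question under each chunking strategy and compare average recall@k (a hypothetical helper, nothing I’ve run):

    # Compare chunking strategies by whether the chunk containing the
    # answer lands in the top-k retrieved results.
    def recall_at_k(retrieved_ids, gold_id, k=5):
        """1.0 if the gold chunk is among the top-k retrieved chunks."""
        return float(gold_id in retrieved_ids[:k])

    # Average recall_at_k over the QA set once with model-based chunks
    # and once with fixed-size chunks for a first-order comparison.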

Please give it a try. I’d appreciate any feedback.

The Python library: https://github.com/mirth/chonky

The transformer model: https://huggingface.co/mirth/chonky_distilbert_base_uncased_...

suddenlybananas No.43671086
I feel you could improve your README.md considerably just by showing the actual output of the little snippet you show.
replies (2): >>43671131, >>43671209
HeavyStorm No.43671131
Came here to write exactly that. The author includes a long passage in the sample, so the README should show us the output.