
262 points by rain1 | 1 comment
mjburgess No.44442335
DeepSeek V3 is ~671B params, which is ~1.4TB physical (at 2 bytes per parameter).

All digitized books ever written/encoded compress to a few TB. The public web is ~50TB. I think a usable zip of all publicly available English electronic text would be on the order of 100TB. So model size is at about 1% of that, and we're in a diminishing-returns area of training -- i.e., going past that 1% has not yielded improvements (cf. GPT-4.5 vs GPT-4o).
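
As a quick back-of-envelope (a sketch using the figures above: 671B params at an assumed 2 bytes each, and the ~100TB corpus estimate, which is a guess rather than a measurement):

    params = 671e9          # DeepSeek V3 parameter count
    bytes_per_param = 2     # assumes 16-bit weights
    model_tb = params * bytes_per_param / 1e12
    
    corpus_tb = 100         # the "usable zip of all public English text" guess above
    print(model_tb, "TB of weights")                      # ~1.34 TB
    print(100 * model_tb / corpus_tb, "% of the corpus")  # ~1.3 %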

This is why compute spend is moving to inference time with "reasoning" models. It's likely we're close to diminishing returns on inference-time compute now too, hence agents, whereby (mostly) deterministic tools supplement the system with extra information/capability.

I think to get any more value out of this model class, we'll be looking at domain-specific specialisation beyond instruction fine-tuning.

I'd guess ~1TB of inference-time VRAM is a reasonable medium-term target for high-quality open-source models -- that's within the reach of most SMEs today. That's about 250B params.
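
A weights-only sketch of what fits in 1TB of VRAM at common precisions (ignoring the KV cache and activations, which eat into the budget):

    vram_bytes = 1e12
    for name, bytes_per_param in [("fp32", 4), ("fp16/bf16", 2), ("int8", 1), ("int4", 0.5)]:
        print(f"{name}: ~{vram_bytes / bytes_per_param / 1e9:.0f}B params")
    # fp32: ~250B, fp16/bf16: ~500B, int8: ~1000B, int4: ~2000B

The ~250B figure above lines up with 4 bytes per parameter, or with 16-bit weights plus generous headroom for the KV cache.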

camel-cdr No.44449842
> All digitized books ever written/encoded compress to a few TB.

I tried to estimate how much data this actually is:

    # Anna's Archive stats
    papers = 105714890
    books = 52670695
    
    # word count estimates
    avrg_words_per_paper = 10000
    avrg_words_per_book = 100000
    
    words = papers*avrg_words_per_paper + books*avrg_words_per_book
    
    # quick test on 27 million words from a few books
    sample_words = 27809550
    sample_bytes = 158824661
    sample_bytes_comp = 28839837  # using zpaq -m5
    
    bytes_per_word = sample_bytes/sample_words
    byte_comp_ratio = sample_bytes_comp/sample_bytes
    word_comp_ratio = bytes_per_word*byte_comp_ratio
    
    print("total:", words*bytes_per_word*1e-12, "TB")       # total: 30.10238345855199 TB
    print("compressed:", words*word_comp_ratio*1e-12, "TB") # compressed: 5.466077036085319 TB

So that's roughly 30 TB uncompressed and ~5.5 TB compressed.

That fits on three 2TB microSD cards, which you could buy from SanDisk for a total of about $750.
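
Checking that last step (assuming ~$250 per 2TB card, which is what the $750 total implies):

    import math
    
    compressed_tb = 5.47
    cards = math.ceil(compressed_tb / 2)         # 2TB per card -> 3 cards
    print(cards, "cards, about $", cards * 250)  # 3 cards, about $ 750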