    28 points addaon | 11 comments
    tuananh ◴[] No.42190811[source]
    It's 16TB of DDR5, btw.
    replies(1): >>42190905 #
    1. metadat ◴[] No.42190905[source]
    Yes, 128x128.

    Good for a database, maybe.

    What else?

    replies(6): >>42191283 #>>42191285 #>>42191559 #>>42191737 #>>42191960 #>>42192376 #
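
    A quick sanity check on the "128x128" figure above (a sketch assuming it means 128 DIMM slots populated with 128GB modules, which the thread doesn't spell out):

        # Back-of-the-envelope check: 128 DIMM slots x 128 GB per DIMM
        dimm_slots = 128
        gb_per_dimm = 128
        total_gb = dimm_slots * gb_per_dimm
        print(total_gb, total_gb / 1024)   # 16384 GB -> 16 TB
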
    2. rustcleaner ◴[] No.42191283[source]
    Large Language Models.
    3. rustcleaner ◴[] No.42191285[source]
    Qubes OS.
    4. moomoo11 ◴[] No.42191559[source]
    Dumb question, but why don't we see more cracked-out high-memory machines? I mean like 1 petabyte of RAM.

    Or do these already exist?

    replies(2): >>42192410 #>>42194227 #
    5. smolder ◴[] No.42191737[source]
    Serving remote desktops to several hundred developers. Maybe a video content server for a Netflix- or YouTube-type business. Hosting a large search index? Some kind of scientific computing?
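
    For the remote-desktop case, a rough per-seat split (the session count below is an illustrative assumption, not a figure from the thread):

        # Hypothetical split of 16 TB of RAM across developer desktop sessions
        total_ram_gb = 16 * 1024           # 16 TB expressed in GB
        sessions = 300                     # assumed number of concurrent developers
        print(total_ram_gb / sessions)     # ~55 GB of RAM per remote desktop
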
    6. HeatrayEnjoyer ◴[] No.42191960[source]
    A half dozen GPT-4 instances
    replies(1): >>42195659 #
    7. guenthert ◴[] No.42192376[source]
    Numerical simulation (HPC). Some, not all, simulations need lots of memory. In 2018, larger servers running such workloads had 1TiB, so I'm not the least bit surprised that six years later it's 16TiB.
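
    Taking those two data points at face value, that works out to roughly a doubling of top-end server memory every year and a half (a sketch; the 1TiB and 16TiB figures are just the ones quoted above):

        import math

        # 1 TiB (2018) -> 16 TiB (2024): implied capacity growth rate
        growth = 16 / 1
        years = 2024 - 2018
        doublings = math.log2(growth)      # log2(16) = 4 doublings
        print(years / doublings)           # 1.5 years per doubling
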
    8. guenthert ◴[] No.42192410[source]
    I'd think the market share for applications which need a huge amount of memory but little CPU processing power and memory bandwidth is rather small.

    Lenovo's slides indicate that they foresee this server being used for in-memory databases.

    Weren't there also distributed filesystems where the metadata server couldn't be scaled out?

    9. eqvinox ◴[] No.42194227[source]
    We don't see more of these machines because most tasks are better served by a higher number of smaller machines. The only benefit of boxes like this is having all of that RAM in one box. Very few use cases need that.
    replies(1): >>42200049 #
    10. metadat ◴[] No.42195659[source]
    LLM inference processors (GPUs) don't use DDR; they use special, costly stacked HBM mounted on the same package as the GPU die.

    I tested out running Llama on a 512GB machine; it's rather slow and inefficient. Maybe 1 token/sec.
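
    That ~1 token/sec is roughly what you'd expect if generation is memory-bandwidth bound, since each generated token has to stream the full set of weights from RAM. A rough sketch (the model size and bandwidth numbers below are illustrative assumptions, not measurements from this thread):

        # Rough ceiling on tokens/sec when bound by memory bandwidth:
        # every generated token reads all model weights once (batch size 1).
        params = 70e9                      # assumed 70B-parameter model
        bytes_per_param = 2                # FP16 weights
        weights_gb = params * bytes_per_param / 1e9   # ~140 GB

        ddr_bw_gbs = 300                   # illustrative multi-channel DDR5 server figure
        hbm_bw_gbs = 3000                  # illustrative HBM figure on a modern GPU

        print(ddr_bw_gbs / weights_gb)     # ~2 tokens/sec ceiling from DDR
        print(hbm_bw_gbs / weights_gb)     # ~21 tokens/sec ceiling from HBM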

    11. moomoo11 ◴[] No.42200049{3}[source]
    Would be fun for a graph db