    28 points by addaon | 18 comments
    1. tuananh ◴[] No.42190811[source]
    it's 16TB of DDR5 btw
    replies(1): >>42190905 #
    2. metadat ◴[] No.42190905[source]
    Yes, 128 x 128GB DIMMs.

    Good for a database, maybe.

    What else?

    replies(6): >>42191283 #>>42191285 #>>42191559 #>>42191737 #>>42191960 #>>42192376 #
    3. rustcleaner ◴[] No.42191283{3}[source]
    Large Language Models.
    4. rustcleaner ◴[] No.42191285{3}[source]
    Qubes OS.
    5. omgin ◴[] No.42191343[source]
    Assuming 8MB per instance, in theory I could run over 2,000,000 copies of DOOM on this thing at the same time.

    Would love to know what the framerate would be

    Hope I get crazy rich one day so I can spend money doing stupid stuff like this.

    replies(1): >>42191451 #
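    A quick sanity check of that figure (a back-of-envelope sketch in Python; the 8 MB-per-instance number is the assumption from the comment above):

      # How many ~8 MiB DOOM instances fit in 16 TiB of RAM?
      total_ram_bytes = 16 * 1024**4      # 16 TiB
      per_instance_bytes = 8 * 1024**2    # 8 MiB
      instances = total_ram_bytes // per_instance_bytes
      print(f"{instances:,}")             # 2,097,152 -- "over 2,000,000" checks out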
    6. throwup238 ◴[] No.42191451[source]
    Now you just need to figure out how to simulate transistors in an instance of the game, so that you can port DOOM to run on a 2,000,000 transistor DOOMputer.
    7. moomoo11 ◴[] No.42191559{3}[source]
    Dumb question, but why don’t we see more cracked-out high-memory machines? I mean like 1 petabyte of RAM.

    Or do these already exist?

    replies(2): >>42192410 #>>42194227 #
    8. smolder ◴[] No.42191737{3}[source]
    Serving remote desktops to several hundred developers. Maybe a video content server for a Netflix- or YouTube-type business. Hosting a large search index? Some kind of scientific computing?
    9. HeatrayEnjoyer ◴[] No.42191960{3}[source]
    A half dozen GPT-4 instances
    replies(1): >>42195659 #
    10. yetihehe ◴[] No.42191969[source]
    Does anyone have any idea what throughput you can achieve with this? Is it simply 128 x 5600MT/s, which would mean 700GB/s?
    replies(1): >>42192195 #
    11. smolder ◴[] No.42192195[source]
    64 channels, 2 DIMMs per channel, so I guess half that.
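    For reference, the usual back-of-envelope is channels x transfer rate x bus width in bytes (a rough sketch only; it assumes 64 channels of DDR5-5600 with a 64-bit data path each, and ignores that populating 2 DIMMs per channel often forces a lower speed):

      # Theoretical peak DRAM bandwidth = channels * MT/s * bytes per transfer
      channels = 64                # 64 channels, 2 DIMMs each (per the comment above)
      transfers_per_sec = 5600e6   # DDR5-5600; may derate with 2 DIMMs per channel
      bytes_per_transfer = 8       # 64-bit data path per channel
      peak = channels * transfers_per_sec * bytes_per_transfer / 1e9
      print(f"~{peak:,.0f} GB/s theoretical peak")   # ~2,867 GB/s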
    12. guenthert ◴[] No.42192376{3}[source]
    Numerical simulation (HPC). Some, not all, simulations need lots of memory. In 2018 the larger servers running such workloads had 1TiB, so I'm not in the least surprised that six years later it's 16.
    13. guenthert ◴[] No.42192410{4}[source]
    I'd think the market for applications which need a huge amount of memory but little CPU processing power and memory bandwidth is rather small.

    Lenovo's slides indicate that they foresee this server being used for in-memory databases.

    Weren't there also distributed filesystems where the metadata server couldn't be scaled out?

    14. znpy ◴[] No.42193082[source]
    It sounds less interesting when you realize the system has four processors, so you're getting "only" four terabytes per CPU, which isn't that much more than what you can currently do on a single socket.

    Some applications get latency spikes when dealing with NUMA systems.
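    One common mitigation is to pin a process's CPUs and allocations to a single NUMA node so it never pays remote-memory latency (a minimal sketch, assuming numactl is installed and "./app" stands in for the real workload, which must fit in one node's memory):

      import subprocess

      # Bind the workload's threads and memory allocations to NUMA node 0.
      subprocess.run(["numactl", "--cpunodebind=0", "--membind=0", "./app"], check=True)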

    15. eqvinox ◴[] No.42194227{4}[source]
    We don't see more of these machines because most tasks are better served by a higher number of smaller machines. The only benefit of boxes like this is having all of that RAM in one box. Very few use cases need that.
    replies(1): >>42200049 #
    16. metadat ◴[] No.42195659{4}[source]
    LLM inference processors (GPUs) don't use DDR; they use special, costly stacked HBM mounted on the GPU package.

    I tested out running Llama on a 512GB machine; it's rather slow and inefficient. Maybe 1 token/sec.
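    A crude way to see why: token generation is usually memory-bandwidth-bound, since each generated token streams roughly all of the model weights from RAM once, so tokens/sec is about bandwidth divided by model size (a sketch; the bandwidth and model-size numbers below are illustrative assumptions, not measurements):

      # Rough upper bound for bandwidth-bound LLM decoding: bandwidth / bytes read per token
      def max_tokens_per_sec(mem_bandwidth_gb_s, model_size_gb):
          return mem_bandwidth_gb_s / model_size_gb

      print(max_tokens_per_sec(200, 140))    # ~1.4 tok/s: 70B fp16 model on server-class DDR
      print(max_tokens_per_sec(3000, 140))   # ~21 tok/s: same model on HBM-class bandwidth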

    17. moomoo11 ◴[] No.42200049{5}[source]
    Would be fun for a graph db