huntaub ◴[] No.43717158[source]
I think the author is spot on. There are three dimensions along which you should evaluate these systems: theoretical limits, efficiency, and practical limits.

From a theoretical point of view, like others have pointed out, parallel distributed file systems have existed for years -- most notably Lustre. These file systems should be capable of scaling out their storage and throughput to, effectively, infinity -- if you add enough nodes.

Then you start to ask: how much storage and throughput can I get from a node that has X TiB of disk? That is, you start to evaluate efficiency. I ran some calculations (against FSx for Lustre, since I'm an AWS guy), and it appears that you can run 3FS in AWS for about 12-30% less than FSxL, depending on the replication factor you choose (which is good, but not great considering that you're now managing the cluster yourself).
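
The calculation is roughly shaped like this (a sketch only -- the per-GiB prices below are placeholders, not the numbers from my actual run; substitute current AWS pricing for your region):

    # Shape of the cost comparison -- every price here is a placeholder,
    # not a figure from my actual spreadsheet.

    def usable_cost_per_gib_month(raw_price: float, replication: int,
                                  spare_overhead: float = 1.1) -> float:
        """$/GiB-month of usable capacity: every byte is stored `replication`
        times, plus ~10% spare capacity for rebuilds and headroom."""
        return raw_price * replication * spare_overhead

    FSXL_PER_GIB_MONTH = 0.145          # placeholder FSx for Lustre price
    INSTANCE_NVME_PER_GIB_MONTH = 0.04  # placeholder amortized local-NVMe instance price

    for rf in (2, 3):
        self_managed = usable_cost_per_gib_month(INSTANCE_NVME_PER_GIB_MONTH, rf)
        print(f"replication x{rf}: self-managed ~${self_managed:.3f}/GiB-month "
              f"vs FSxL ~${FSXL_PER_GIB_MONTH:.3f}/GiB-month")

With real prices plugged in, the gap between the two lines is where the 12-30% range comes from.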

Then, the third thing you ask is practical: anecdotally, are people actually able to configure these file systems at the deployment size I want? (This is where you hear things like "oh, it's hard to get Ceph to 1 TiB/s.") That remains to be seen for something like 3FS.

Ultimately, I obviously believe that storage and data are key to how these AI companies operate -- so it makes sense that DeepSeek would build something like this in-house to get the properties they're looking for. My hope is that we, at Archil, can find a better set of defaults that work for most people without needing to manage a giant cluster or even worry about how things are replicated.

replies(2): >>43717307 #>>43726407 #
jamesblonde ◴[] No.43717307[source]
Maybe AWS could start by making fast NVMe available -- without requiring multi-TB disks just to get 1 GB/s. The 3FS experiments were run on 14 GB/s NVMe disks -- an order of magnitude more throughput than anything available in AWS today.

SSDs Have Become Ridiculously Fast, Except in the Cloud: https://news.ycombinator.com/item?id=39443679

replies(2): >>43719482 #>>43720293 #
__turbobrew__ ◴[] No.43720293[source]
There are i4i instances in AWS which can get you a lot of IOPS with a smaller disk.
replies(2): >>43725223 #>>43725914 #
jamesblonde ◴[] No.43725914[source]
Had a look -- baseline disk throughput is 78.12 MB/s, and max throughput (30 mins/day) is 1250 MB/s.

The NVMe I bought for $150 with 4 TB of capacity gives me 6000 MB/s sustained.

https://docs.aws.amazon.com/ec2/latest/instancetypes/so.html

replies(2): >>43728016 #>>43729920 #
__turbobrew__ ◴[] No.43729920[source]
You are incorrect; the numbers you quoted are EBS volume performance. iX instances have directly attached NVMe volumes, which are separate from EBS.

> The NVMe I bought for $150

Sure, now cost out the rest of the server, the racks, the colocation space for the racks, power, multi-AZ redundancy, a Clos network fabric, network peering, spare hardware for failures, off-site backups, supply-chain management, a team of engineers to design the system, a team of staff to physically rack and unrack hardware, a team of engineers to manage the network, and on-call rotations for all of those teams.

Sure, the NVMe is just $150, bro.
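
A deliberately rough sketch of that tally (every figure below is a made-up placeholder, not data from any real deployment -- the only point is how small a slice the drives are):

    # Back-of-envelope yearly TCO for a small self-hosted storage footprint.
    yearly_costs = {
        "nvme_drives_amortized": 150 * 24 / 5,      # 24 drives over a ~5-year life
        "servers_amortized":     4 * 15_000 / 5,    # 4 storage nodes
        "colo_space_and_power":  4 * 3_000,
        "network_gear_and_peering": 10_000,
        "spares_and_backups":    5_000,
        "fraction_of_eng_salaries": 60_000,         # slice of SRE/network/DC staff time
    }
    total = sum(yearly_costs.values())
    drive_share = yearly_costs["nvme_drives_amortized"] / total
    print(f"~${total:,.0f}/year, and the drives are ~{drive_share:.1%} of it")

Swap in your own numbers; the drives still end up in the low single digits of the bill.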

replies(1): >>43741472 #
jamesblonde ◴[] No.43741472[source]
You claim I am incorrect, but you don't provide a reference or numbers, and I couldn't find any.
replies(1): >>43741691 #
__turbobrew__ ◴[] No.43741691[source]
AWS doesn't publish throughput numbers for the local NVMe on iX instances; you have to look at benchmarks or test it yourself. It's similar to the packets-per-second limits, which aren't published either and can only be inferred through benchmarks.
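
For a quick self-test, something like the sketch below works (assumptions: a Linux host, a locally attached NVMe device whose path you've confirmed with lsblk, and permission to read the raw block device). fio with --direct=1 and a deep queue is the proper tool; this single-threaded pass will understate the device's peak.

    #!/usr/bin/env python3
    """Crude sequential-read throughput check for a local NVMe device."""
    import mmap
    import os
    import time

    DEVICE = "/dev/nvme1n1"   # hypothetical device path; check lsblk on your instance
    BLOCK_SIZE = 1 << 20      # 1 MiB reads, aligned as O_DIRECT requires
    TOTAL_BYTES = 8 << 30     # read 8 GiB

    def main() -> None:
        # Anonymous mmap gives a page-aligned buffer, which O_DIRECT needs.
        buf = mmap.mmap(-1, BLOCK_SIZE)
        # O_DIRECT bypasses the page cache so re-runs don't just measure RAM.
        fd = os.open(DEVICE, os.O_RDONLY | os.O_DIRECT)
        try:
            read = 0
            start = time.monotonic()
            while read < TOTAL_BYTES:
                n = os.readv(fd, [buf])
                if n == 0:    # hit end of device
                    break
                read += n
            elapsed = time.monotonic() - start
            # Single-threaded, queue depth 1: a floor, not the device's ceiling.
            print(f"{read / (1 << 20) / elapsed:.0f} MiB/s sequential read")
        finally:
            os.close(fd)
            buf.close()

    if __name__ == "__main__":
        main()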