621 points sebg | 6 comments
huntaub ◴[] No.43717158[source]
I think the author is spot on. There are three dimensions along which you should evaluate these systems: theoretical limits, efficiency, and practical limits.

From a theoretical point of view, as others have pointed out, parallel distributed file systems have existed for years -- most notably Lustre. These file systems should be capable of scaling their storage and throughput out to, effectively, infinity -- if you add enough nodes.

Then you start to ask how much storage and throughput you can get from a node with X TiB of disk -- that is, you start evaluating efficiency. I ran some calculations (against FSx for Lustre, since I'm an AWS guy), and it appears you can run 3FS in AWS for about 12-30% less than FSxL, depending on the replication factor you choose (which is good, but not great considering that you're now managing the cluster yourself).
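
For anyone redoing that kind of math, a rough back-of-envelope sketch is below -- every number in it is a placeholder to be filled in with current AWS pricing, not a figure from my calculation:

    # effective $/TiB-month of self-managed 3FS on instance storage vs. FSx for Lustre
    RAW_TIB_PER_NODE=50        # placeholder: raw NVMe capacity per storage node
    REPLICATION=3              # placeholder: 3FS replication factor you choose
    NODE_MONTHLY=5000          # placeholder: EC2 cost per node, $/month
    FSXL_PER_TIB_MONTH=150     # placeholder: FSxL cost, $/TiB-month
    echo "3FS:  $(echo "$NODE_MONTHLY / ($RAW_TIB_PER_NODE / $REPLICATION)" | bc -l) \$/TiB-month"
    echo "FSxL: $FSXL_PER_TIB_MONTH \$/TiB-month"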

The third thing you ask is about practical limits: anecdotally, can people actually configure these file systems at the deployment size I want? (This is where you hear things like "oh, it's hard to get Ceph to 1 TiB/s.") That remains to be seen for something like 3FS.

Ultimately, I believe that storage and data are central to how these AI companies operate -- so it makes sense that DeepSeek would build something like this in-house to get the properties they're looking for. My hope is that we, at Archil, can find a better set of defaults that work for most people without needing to manage a giant cluster or even worry about how things are replicated.

replies(2): >>43717307 #>>43726407 #
jamesblonde ◴[] No.43717307[source]
Maybe AWS could start by making fast NVMe available - without requiring multi-TB disks just to get 1 GB/s. The 3FS experiments were run on NVMe disks doing 14 GB/s - an order of magnitude more throughput than anything available in AWS today.
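
For reference, this is roughly how you'd check what a given instance's local NVMe actually delivers (the device name and parameters here are assumptions; point it at a scratch device):

    # sequential read throughput from a local NVMe device
    fio --name=seqread --filename=/dev/nvme1n1 --rw=read --bs=1M \
        --ioengine=libaio --iodepth=32 --numjobs=4 --direct=1 \
        --time_based --runtime=30 --group_reporting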

SSDs Have Become Ridiculously Fast, Except in the Cloud: https://news.ycombinator.com/item?id=39443679

replies(2): >>43719482 #>>43720293 #
1. kridsdale1 ◴[] No.43719482[source]
On my home LAN, with 10 Gbps fiber between a MacBook Pro and a server 10 feet away, I get about 1.5 Gbps over the network vs. ~50 Gbps from the disks locally. (Bits, not bytes.)

I traced this to the macOS SMB implementation just being slow. I set up an NFS driver and it got about twice as fast, but it's annoying to mount and use, and still far from the disks' capabilities.
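
One way to separate the link from the protocol (hostnames and paths here are made up): measure raw TCP first, then read a large file through the mount.

    iperf3 -c server.local -P 4                       # raw TCP throughput over the 10 Gbps link
    dd if=/Volumes/share/big.bin of=/dev/null bs=1m   # read through the SMB mount (macOS dd takes lowercase 1m)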

I’ve mostly resorted to abandoning the network (after large expense) and using Thunderbolt and physical transport of the drives.

replies(2): >>43719740 #>>43720026 #
2. greenavocado ◴[] No.43719740[source]
Is NFS out of the question?
replies(1): >>43721201 #
3. dundarious ◴[] No.43720026[source]
SMB/CIFS is an incredibly chatty, synchronous protocol. There are (or were) entire products built around mitigating and working around this when using it over high-latency satellite links (the US military did/does this).
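
You can see the chattiness directly by counting SMB2 round trips for a single small copy (the interface, host, and paths below are assumptions; tshark ships with Wireshark):

    sudo tcpdump -i en0 -nn 'tcp port 445' -w /tmp/smb.pcap &
    cp /Volumes/share/small.bin /tmp/ ; sudo kill %1
    tshark -r /tmp/smb.pcap -Y smb2 | wc -l   # each line is one SMB2 PDU; even a small copy produces many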
4. kridsdale1 ◴[] No.43721201[source]
I have set it up but it’s not easy to get drivers working on a Mac.
replies(1): >>43724823 #
5. insaneirish ◴[] No.43724823{3}[source]
What particular drivers are you referring to? NFS is natively supported in macOS...
replies(1): >>43725922 #
6. olavgg ◴[] No.43725922{4}[source]
That is true, though the implementation is weird.

I mount my NFS shares like this:

    sudo mount -t nfs -o nolocks -o resvport 192.168.1.1:/tank/data /mnt/data

-o nolocks: disables file locking on the mounted share. Useful if the NFS server or client does not support locking, or if there are issues with lock daemons. On macOS this is often necessary because lockd can be flaky.

-o resvport: tells the NFS client to use a reserved source port (<1024) for the connection. Some NFS servers (like some Linux configurations or *BSDs with stricter security) only accept requests from clients using reserved ports (for authentication purposes).
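
For context, the resvport requirement comes from the server side: on a Linux NFS server the default "secure" export option is what rejects non-reserved client ports. A sketch of the corresponding /etc/exports line (path and subnet are assumptions):

    # default is 'secure' (reserved ports only); add 'insecure' to accept client ports >= 1024
    /tank/data 192.168.1.0/24(rw,sync,no_subtree_check,insecure)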