
621 points | sebg | 1 comment
londons_explore No.43716756
This seems like a pretty complex setup with lots of features which aren't obviously important for a deep learning workload.

Presumably the key necessary features are PBs' worth of storage, read/write parallelism (which can be achieved by splitting a 1PB file into, say, 10,000 100GB shards and having each client read only the shards it needs), and redundancy.
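
To make the sharding idea concrete, here is a minimal Python sketch of shard-per-worker reads (all paths, names, and counts are hypothetical, not tied to any particular filesystem): each worker opens only the shards assigned to it, so reads parallelise across the cluster without any coordination.

    # Hypothetical layout: a ~1 PB dataset split into 10,000 x ~100 GB shards.
    SHARD_COUNT = 10_000

    def shards_for_worker(worker_rank: int, world_size: int) -> list[str]:
        """Return the shard paths this worker is responsible for (round-robin)."""
        return [
            f"/data/train/shard-{i:05d}.bin"  # hypothetical path scheme
            for i in range(SHARD_COUNT)
            if i % world_size == worker_rank
        ]

    def read_shard(path: str) -> bytes:
        # Each shard is an independent file, so no coordination with other
        # workers is needed to read it.
        with open(path, "rb") as f:
            return f.read()

    if __name__ == "__main__":
        my_shards = shards_for_worker(worker_rank=3, world_size=512)
        print(f"worker 3 reads {len(my_shards)} shards, e.g. {my_shards[0]}")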

Consistency is hard to achieve and seems to have no use here - your programmers can manage to make sure different processes are writing to different filenames.

threeseed No.43717178
> Consistency is hard to achieve and seems to have no use here

Famous last words.

It is very common when operating data platforms at this scale to lose a lot of nodes over time, especially in the cloud. So having a robust consistency/replication mechanism is vital to making sure your training job doesn't need to be restarted just because the block it needs isn't available on the particular node it expects.
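
As a toy illustration of why that matters, here is a hedged Python sketch (invented helper names, no real storage client): with replicated blocks, losing a node just means falling back to another copy instead of restarting the whole job.

    class NodeUnavailable(Exception):
        pass

    def fetch_from_node(node: str, block_id: str) -> bytes:
        # Stand-in for a network read; pretend node-a has died.
        if node == "node-a":
            raise NodeUnavailable(node)
        return f"<contents of {block_id} from {node}>".encode()

    def read_block(block_id: str, replicas: list[str]) -> bytes:
        """Try each replica in turn; fail only if every copy is gone."""
        for node in replicas:
            try:
                return fetch_from_node(node, block_id)
            except NodeUnavailable:
                continue
        raise RuntimeError(f"all replicas of {block_id} unavailable")

    print(read_block("block-0042", ["node-a", "node-b", "node-c"]))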

londons_explore No.43717285
Indeed, redundancy is fairly important (although for the largest part, the training data, it actually doesn't matter if some chunks are missing).

But the type of consistency they were talking about is strong ordering: the kind of thing you might want in a database with lots of people reading and writing tiny bits of data, potentially the same bits of data, where you need to make sure a user's writes are rejected if they are impossible to fulfil, and reads never return an impossible intermediate state. That isn't needed for machine learning.
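
A small sketch of that contrast, using a toy in-memory store rather than any real database: a strongly consistent store has to version keys and reject conflicting writes to the same key, while ML workers that each write their own filename never hit that case at all.

    class VersionedStore:
        """Toy strongly-consistent key-value store with compare-and-set writes."""
        def __init__(self):
            self._data = {}  # key -> (version, value)

        def read(self, key):
            return self._data.get(key, (0, None))

        def write_if_version(self, key, expected_version, value) -> bool:
            version, _ = self._data.get(key, (0, None))
            if version != expected_version:
                return False          # conflicting concurrent write: rejected
            self._data[key] = (version + 1, value)
            return True

    # Database-like usage: two writers race on the same key, one must lose.
    store = VersionedStore()
    assert store.write_if_version("balance", 0, 100) is True
    assert store.write_if_version("balance", 0, 200) is False  # stale, rejected

    # ML-like usage: each worker writes its own file (hypothetical paths),
    # so no conflicts can occur and no ordering guarantees are needed.
    for rank in range(4):
        with open(f"/tmp/checkpoint-rank{rank}.bin", "wb") as f:
            f.write(b"...")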