573 points | huntaub | 6 comments

Hey HN, I’m Hunter, the founder of Regatta Storage (https://regattastorage.com). Regatta Storage is a new cloud file system that provides unlimited pay-as-you-go capacity, local-like performance, and automatic synchronization to S3-compatible storage. For example, you can use Regatta to instantly access massive data sets in S3 with Spark, PyTorch, or pandas without paying for large, local disks or waiting for the data to download.
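
To make that concrete, here’s a minimal sketch (the mount point and file name below are just placeholders, not a real layout): once a Regatta file system is mounted, an existing S3 data set reads like a local file.

    # Placeholder paths: assume the Regatta file system is mounted at /mnt/regatta
    # and the backing S3 bucket already contains data/events.parquet.
    import pandas as pd

    df = pd.read_parquet("/mnt/regatta/data/events.parquet")  # read through the cache, no download step
    print(df.describe())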

Check out an overview of how the service works here: https://www.youtube.com/watch?v=xh1q5p7E4JY, and you can try it for free at https://regattastorage.com after signing up for an account. We wanted to let you try it without an account, but we figured that “Hacker News shares a file system and S3 bucket” wouldn’t be the best experience for the community.

I built Regatta after spending nearly a decade building and operating at-scale cloud storage at places like Amazon’s Elastic File System (EFS) and Netflix. During my 8 years at EFS, I learned a lot about how teams thought about their storage usage. Users frequently told me that they loved how simple and scalable EFS was, and -- like S3 -- they didn’t have to guess how much capacity they needed up front.

When I got to Netflix, I was surprised that there wasn’t more usage of EFS. If you looked around, it seemed like a natural fit. Every application needed a POSIX file system. Lots of applications had unclear or spiky storage needs. Often, developers wanted their storage to last beyond the lifetime of an individual instance or container. In fact, if you looked across all Netflix applications, some ridiculous amount of money was being spent on empty storage space because each of these local drives had to be overprovisioned for potential usage.

However, in many cases, EFS wasn’t the perfect choice for these workloads. Moving workloads from local disks to NFS often ran into performance issues. Further, applications that treated their local disks as ephemeral had to manually “clean up” leftover data in a persistent storage system.

At this point, I realized that there was a missing solution in the cloud storage market which wasn’t being filled by either block or file storage, and I decided to build Regatta.

Regatta is a pay-as-you-go cloud file system that automatically expands with your application. Because it automatically synchronizes with S3 using native file formats, you can connect it to existing data sets and use recently written file data directly from S3. When data isn’t actively being used, it’s removed from the Regatta cache, so you only pay for the backing S3 storage. Finally, we’re developing a custom file protocol which allows us to achieve local-like performance for small-file workloads and Lustre-like scale-out performance for distributed data jobs.
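
As a rough sketch of that flow (bucket name, key, and paths are placeholders, not anything specific to your account): write a file through the mount, and after asynchronous propagation the same data is readable as a plain S3 object.

    # Placeholder names throughout; this only illustrates the write-through-then-read-from-S3 flow.
    import boto3

    # Write through the mounted file system.
    with open("/mnt/regatta/results/output.csv", "w") as f:
        f.write("id,value\n1,42\n")

    # Later -- once the write has propagated asynchronously -- it's a normal S3 object.
    s3 = boto3.client("s3")
    obj = s3.get_object(Bucket="my-backing-bucket", Key="results/output.csv")
    print(obj["Body"].read().decode())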

Under the hood, customers mount a Regatta file system by connecting to our fleet of caching instances over NFSv3 (soon, our custom protocol). Our instances then connect to the customer’s S3 bucket on the backend, and provide sub-millisecond cached-read and write performance. This durable cache allows us to provide a strongly consistent, efficient view of the file system to all connected file clients. We can perform challenging operations (like directory renaming) quickly and durably, while they asynchronously propagate to the S3 bucket.
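
For illustration only (the endpoint, export path, and options below are placeholders; the real mount command comes from the console), mounting over NFSv3 looks roughly like this:

    # Placeholder endpoint and mount point; requires root privileges.
    import subprocess

    subprocess.run(
        ["mount", "-t", "nfs", "-o", "nfsvers=3",
         "fs-example.regattastorage.com:/", "/mnt/regatta"],
        check=True,
    )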

We’re excited to see users share our vision for Regatta. We have teams who are using us to build totally serverless Jupyter notebook servers for their AI researchers who prefer to upload and share data using the S3 web UI. We have teams who are using us as a distributed caching layer on top of S3 for low-latency access to common files. We have teams who are replacing their thin-provisioned Ceph boot volumes with Regatta for significant savings. We can’t wait to see what other things people will build and we hope you’ll give us a try at regattastorage.com.

We’d love to get any early feedback from the community, ideas for future direction, or experiences in this space. I’ll be in the comments for the next few hours to respond!

1. bassp No.42174813
This feels, intuitively, like it would be very hard to make crash consistent (given the durable caching layer in between the client and S3). How are you approaching that?
replies(1): >>42174848 #
2. huntaub No.42174848
It depends on what you mean by crash-consistent. We handle crash consistency at the client fine (it’s the same crash consistency you get from NFSv3), and crash consistency at the server is also fine (we use ETags to detect which version of an object is in the backing data storage). Tell me a bit more about what you're thinking.
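
Roughly, the idea looks like this (a simplified sketch, not our actual code; bucket and key names are placeholders):

    # Placeholder sketch of ETag-based change detection against the backing bucket.
    import boto3

    s3 = boto3.client("s3")

    def backing_object_changed(bucket: str, key: str, cached_etag: str) -> bool:
        """Return True if the object in the backing bucket differs from the cached version."""
        head = s3.head_object(Bucket=bucket, Key=key)
        return head["ETag"] != cached_etag
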
replies(1): >>42175193 #
3. bassp No.42175193
For sure! Upon reflection, maybe I’m less curious about crash consistency (corruption or whatever) per se, and more about what kinds of durability guarantees I can expect in the presence of a crash.

I’m specifically interested in how you’re handling synchronization between the NFS layer and S3 wrt fsync. The description says that data is “asynchronously” written back out to S3. That implies to me that it’s possible for something like this to happen:

1. I write to a file and fsync it

2. Your NFS layer makes the file durable and returns

3. Your NFS layer crashes (oh no, the intern merged some bad terraform!) before it writes back to S3

4. I go to read the file from S3… and it’s not there!

Is that possible? I.e., is the only way to get a consistent view of the data to read “through” the NFS layer, even if I fsync?

replies(1): >>42175348 #
4. huntaub No.42175348
So, the step that differs from your concern is Step 3. Let's say that we have a catastrophic availability scenario (as you said, an intern comes in and tears something down) -- our job is to make sure that the data in our durable cache remains there (and to put safeguards in place to prevent the intern from touching that data). If we do that, then after any crash of our system we can recover the data and apply it to S3. I know that's kind of hand-wavy, but this is how things like AWS S3 work -- a super high bar for operational processes to keep data safe.
replies(2): >>42175401 #>>42175421 #
5. bassp No.42175401
Gotcha! Thanks for the answer; so the tl;dr is, if I’m understanding:

“All fsync-ed writes will eventually make it to S3, but fsync successfully returning only guarantees that writes are durable in our NFS caching layer, not in the S3 layer”?

6. huntaub No.42175421
For some reason, I don't see a "reply" button to your later comment (maybe there's an HN threading limit), but the answer is yes -- fsync guarantees durability in the Regatta durable cache, not in S3.
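
To make those semantics concrete, a toy sketch (paths and bucket names are placeholders; this illustrates the guarantee, not our implementation):

    # Placeholder path; fsync makes the write durable in the Regatta cache,
    # not (yet) in the backing S3 bucket.
    import os

    fd = os.open("/mnt/regatta/checkpoint.bin", os.O_WRONLY | os.O_CREAT, 0o644)
    os.write(fd, b"checkpoint data")
    os.fsync(fd)   # durable in the Regatta durable cache once this returns
    os.close(fd)
    # Reading s3://my-backing-bucket/checkpoint.bin immediately afterward may still
    # fail with NoSuchKey until the write propagates asynchronously to S3.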