Most active commenters
  • huntaub(134)
  • mdaniel(4)
  • eerikkivistik(4)
  • ignoramous(4)
  • bassp(3)
  • mritchie712(3)
  • weinzierl(3)
  • garganzol(3)
  • benatkin(3)
  • dangoodmanUT(3)

572 points by huntaub | 312 comments

Hey HN, I’m Hunter, the founder of Regatta Storage (https://regattastorage.com). Regatta Storage is a new cloud file system that provides unlimited pay-as-you-go capacity, local-like performance, and automatic synchronization to S3-compatible storage. For example, you can use Regatta to instantly access massive data sets in S3 with Spark, PyTorch, or pandas without paying for large, local disks or waiting for the data to download.
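
As a rough illustration of that workflow, here is a minimal Python sketch; the mount path and file names below are hypothetical placeholders, not real endpoints:

    # Minimal sketch: read a dataset that already lives in S3 through a
    # hypothetical Regatta mount, instead of downloading it to local disk first.
    import pandas as pd

    df = pd.read_parquet("/mnt/regatta/datasets/events/2024-11.parquet")
    print(df.describe())

    # Writes land on the file system immediately and synchronize back to the
    # backing S3 bucket shortly afterward.
    df.sample(frac=0.01).to_csv("/mnt/regatta/scratch/events-sample.csv", index=False)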

Check out an overview of how the service works here: https://www.youtube.com/watch?v=xh1q5p7E4JY, and you can try it for free at https://regattastorage.com after signing up for an account. We wanted to let you try it without an account, but we figured that “Hacker News shares a file system and S3 bucket” wouldn’t be the best experience for the community.

I built Regatta after spending nearly a decade building and operating at-scale cloud storage at places like Amazon’s Elastic File System (EFS) and Netflix. During my 8 years at EFS, I learned a lot about how teams thought about their storage usage. Users frequently told me that they loved how simple and scalable EFS was, and -- like S3 -- they didn’t have to guess how much capacity they needed up front.

When I got to Netflix, I was surprised that there wasn’t more usage of EFS. If you looked around, it seemed like a natural fit. Every application needed a POSIX file system. Lots of applications had unclear or spiky storage needs. Often, developers wanted their storage to last beyond the lifetime of an individual instance or container. In fact, if you looked across all Netflix applications, some ridiculous amount of money was being spent on empty storage space because each of these local drives had to be overprovisioned for potential usage.

However, in many cases, EFS wasn’t the perfect choice for these workloads. Moving workloads from local disks to NFS often encountered performance issues. Further, applications which treated their local disks as ephemeral would have to manually “clean up” leftover data in a persistent storage system.

At this point, I realized that there was a missing solution in the cloud storage market which wasn’t being filled by either block or file storage, and I decided to build Regatta.

Regatta is a pay-as-you-go cloud file system that automatically expands with your application. Because it automatically synchronizes with S3 using native file formats, you can connect it to existing data sets and use recently written file data directly from S3. When data isn’t actively being used, it’s removed from the Regatta cache, so you only pay for the backing S3 storage. Finally, we’re developing a custom file protocol which allows us to achieve local-like performance for small-file workloads and Lustre-like scale-out performance for distributed data jobs.

Under the hood, customers mount a Regatta file system by connecting to our fleet of caching instances over NFSv3 (soon, our custom protocol). Our instances then connect to the customer’s S3 bucket on the backend, and provide sub-millisecond cached-read and write performance. This durable cache allows us to provide a strongly consistent, efficient view of the file system to all connected file clients. We can perform challenging operations (like directory renaming) quickly and durably, while they asynchronously propagate to the S3 bucket.
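
For the curious, mounting is a standard NFSv3 mount; here's a hypothetical sketch from a provisioning script (the endpoint, export name, and options are placeholders, not the real ones -- the docs are the source of truth):

    # Hypothetical sketch of mounting a Regatta file system over NFSv3 from a
    # provisioning script. Endpoint, export, and options are placeholders.
    import subprocess

    subprocess.run(
        [
            "sudo", "mount", "-t", "nfs",
            "-o", "nfsvers=3,rsize=1048576,wsize=1048576,hard",
            "fs-example.regattastorage.com:/", "/mnt/regatta",
        ],
        check=True,
    )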

We’re excited to see users share our vision for Regatta. We have teams who are using us to build totally serverless Jupyter notebook servers for their AI researchers who prefer to upload and share data using the S3 web UI. We have teams who are using us as a distributed caching layer on top of S3 for low-latency access to common files. We have teams who are replacing their thin-provisioned Ceph boot volumes with Regatta for significant savings. We can’t wait to see what other things people will build and we hope you’ll give us a try at regattastorage.com.

We’d love to get any early feedback from the community, ideas for future direction, or experiences in this space. I’ll be in the comments for the next few hours to respond!

1. ahstilde ◴[] No.42174255[source]
I'm not in storage SaaS, so noob question - how is this different from Snowflake or Databricks?
replies(1): >>42174291 #
2. huntaub ◴[] No.42174291[source]
Thanks for the question!

Snowflake and Databricks aren't storage products, but are managed compute platforms on top of storage that probably looks a lot like this. Snowflake allows you to easily connect different data sets to your data warehouse, and Databricks provides a managed analytics (Spark) offering.

Regatta, on the other hand, would allow you to more easily build the next Snowflake or Databricks by taking advantage of the same low-cost, unlimited storage in S3 that they likely use.

3. koolba ◴[] No.42174305[source]
Neat stuff. I think everybody with an interest in NFS has toyed with this idea at some point.

> Under the hood, customers mount a Regatta file system by connecting to our fleet of caching instances over NFSv3 (soon, our custom protocol). Our instances then connect to the customer’s S3 bucket on the backend, and provide sub-millisecond cached-read and write performance. This durable cache allows us to provide a strongly consistent, efficient view of the file system to all connected file clients. We can perform challenging operations (like directory renaming) quickly and durably, while they asynchronously propagate to the S3 bucket.

How do you handle the cache server crashing before syncing to S3? Do the cache servers have local disk as well?

Ditto for how to handle intermittent S3 availability issues?

What are the fsync guarantees for file append operations and directories?

replies(1): >>42174365 #
4. Jayakumark ◴[] No.42174329[source]
How does this compare to https://github.com/awslabs/mountpoint-s3 ?
replies(1): >>42174379 #
5. huntaub ◴[] No.42174365[source]
Thanks for the question!

> How do you handle the cache server crashing before syncing to S3? Do the cache servers have local disk as well?

Our caching layer is highly durable, which is (in my opinion) the key for doing this kind of staging. This means that once a write is complete to Regatta, we guarantee that it will eventually complete on S3.

For this reason, server crashes and intermittent S3 availability issues are not a problem because we have the writes stored safely.

> What are the fsync guarantees for file append operations and directories?

We have strong, read-after-write consistency for all connected file system clients -- including for operations which aren't possible to perform on S3 efficiently (such as renames, appends, etc). We asynchronously push those writes to S3, so there may be a few minutes before you can access them directly from the bucket. But, during this time, the file system interface will always reflect the up-to-date view.
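
To make that concrete, here is a small sketch of the model (paths hypothetical): fsync returns once the write is durable in Regatta, other mounted clients see it immediately, and the object shows up in the backing bucket a little later.

    # Sketch of the consistency model described above; paths are hypothetical.
    import os

    fd = os.open("/mnt/regatta/logs/app.log",
                 os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
    os.write(fd, b"job 42 finished\n")
    os.fsync(fd)          # durable in the Regatta layer once this returns
    os.close(fd)

    # Any other client with the file system mounted sees the append right away
    # (strong read-after-write through the file interface)...
    with open("/mnt/regatta/logs/app.log", "rb") as f:
        print(f.read())

    # ...while the updated object appears in the S3 bucket asynchronously,
    # typically within a few minutes per the description above.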

replies(3): >>42174934 #>>42175879 #>>42175912 #
6. jeffbee ◴[] No.42174372[source]
People have been throwing out "POSIX" distributed file systems for a long time but this claim usually raises more questions than it answers. Especially since clients access it via NFSv3, which has extremely weak semantics and leaves most POSIX filesystem features unimplemented.
replies(2): >>42174401 #>>42174498 #
7. huntaub ◴[] No.42174379[source]
Thanks for the question! Mountpoint for Amazon S3 is a FUSE layer that doesn't support full POSIX semantics. For example, you can't use Mountpoint for Amazon S3 for random writes to existing files, appends, or renames. This means that you have to carefully instrument your application to understand whether or not it's compatible with Mountpoint, which can be error-prone. Regatta, on the other hand, provides full POSIX compatibility for the file interface, which means that it works out-of-the-box with all file based applications.
replies(2): >>42174506 #>>42174554 #
8. krawczstef ◴[] No.42174389[source]
Does this compete with Minio?
replies(1): >>42174419 #
9. huntaub ◴[] No.42174401[source]
I think this is a great call out, and you're correct. One example that comes to mind is that NFSv3 doesn't support flags on the rename() operation (such as RENAME_WHITEOUT), which means that you can't use them as an overlay upperdir (which is desirable for building container runtimes). To solve this, we're working on a custom protocol that we intend to place in the Linux kernel which will expose a broader set of features than we can get in NFS. As I tell people, this is the worst version of Regatta that will ever exist -- we're going to make it better every day.
10. huntaub ◴[] No.42174419[source]
I don't think so, I see them as complementary. MinIO is great when you have downstream applications which speak the S3 API that need acceleration of that data. Regatta is designed for applications which speak file semantics (think application logging, storing corpuses of training data, or state) that don't run on the S3 API. Regatta actually supports MinIO as an S3-compatible backend for your file system!
replies(2): >>42174615 #>>42177108 #
11. foft ◴[] No.42174423[source]
Interesting. Reminds me of FlexFS (https://flexfs.io/). I spoke to a very knowledgeable person there when investigating what to use but we ended up using EFS instead.

An annoying feature of EFS is how it scales with the amount of storage, so when it's empty it's very slow. We also started hitting its limits, so we could not scale our compute workers. Both can be solved by paying for elastic IOPS, but that is VERY expensive.

replies(2): >>42174491 #>>42175443 #
12. remram ◴[] No.42174429[source]
Is this like JuiceFS? https://juicefs.com/
replies(1): >>42174449 #
13. mbrumlow ◴[] No.42174443[source]
Is every file an S3 object? What if you change the middle of a large file?
replies(1): >>42174462 #
14. huntaub ◴[] No.42174449[source]
It's similar to JuiceFS, but JuiceFS writes and reads data from S3 in a proprietary block format. This means that you cannot connect JuiceFS to existing data sets in S3, and you cannot use data written through JuiceFS from the S3 API directly. On the other hand, Regatta reads and writes data to S3 using its native format -- so you can do these things!
15. huntaub ◴[] No.42174462[source]
That's correct -- every file is an S3 object. If you change the middle of a large file, Regatta will store the change on our durable caching layer efficiently (and most writes complete in under 1ms). Regatta will then asynchronously update the large object in S3, which may take longer. We automatically batch multiple changes together to minimize the number of operations to your S3 bucket!
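
In concrete terms, an in-place update from the file side is just an ordinary positional write (path hypothetical); the batching toward S3 happens behind the scenes:

    # Sketch: patch a few KiB in the middle of an existing large file through
    # the mount (path is a placeholder). The write is acknowledged from the
    # cache; the backing S3 object is updated asynchronously.
    import os

    fd = os.open("/mnt/regatta/data/big-matrix.bin", os.O_RDWR)
    os.pwrite(fd, b"\x00" * 4096, 10 * 1024**3)  # overwrite 4 KiB at a 10 GiB offset
    os.fsync(fd)
    os.close(fd)
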
replies(1): >>42174736 #
16. mr90210 ◴[] No.42174466[source]
That’s so nice to see. For the past few days I had been tinkering with the concept of a file system + blob storage, but I had a hard time coming up with use cases other than an unlimited Dropbox where you own the storage and truly pay as you go.
replies(1): >>42174510 #
17. huntaub ◴[] No.42174491[source]
Yes, I think it's a similar product, but we're looking to provide high performance on all dimensions (latency, throughput, and IOPS). I totally agree with you that Elastic Throughput solves this problem, but it can be expensive for many workloads!
replies(1): >>42191049 #
18. crest ◴[] No.42174498[source]
You can implement a single-client NFSv3 server that provides stronger guarantees than expected (of NFSv3), and if you implement the "optional" companion protocols it should come closer to local filesystem semantics than most network filesystems. What would be neat about such a solution is that you can run the server either locally or remotely (same site, high bandwidth, low latency) without clients having to run a custom FUSE server or, even worse, load a (from the customer's point of view) experimental vendor kernel module. Upgrading from NFSv3 to NFSv4 would get you a bit closer to POSIX semantics, but of course it would still be NFS, just not over a congested, jittery link to a shared server. NFSv4 delegations in particular could be a nice way to let the client's kernel buffer a lot of bursty async I/O locally. Just keep in mind how little POSIX really guarantees, instead of assuming it will behave like ext4/XFS or, even better, ZFS on a laptop NVMe with two levels of power-loss protection (big caps in the drive and the laptop battery).
replies(1): >>42174546 #
19. fermigier ◴[] No.42174501[source]
TL;DR: is this a cloud service or an on-premise thing?
replies(1): >>42174523 #
20. memco ◴[] No.42174506{3}[source]
Does Regatta require a local disk sized for the entire file to support random writes? One problem I’ve seen is that we have set up instances with a modest local disk, but then work with files for which we need to pull the whole file into a local cache, modify some parts, and then push the full result back into S3. It would be helpful to have a way to work with S3 as though it were POSIX without having to match the local disk size to the largest file we might need to process.
replies(1): >>42174606 #
21. huntaub ◴[] No.42174510[source]
I think that "owning the storage" is such an important part of this. I'm excited that folks who use this will continue to have access to their data directly through S3, so if they ever decide to move off of Regatta, all their data is still right there. This is also important at large companies which already have compliance and governance workflows that connect to data in S3 -- Regatta enables them to continue to use those workflows without having to think about another primary storage system.
22. mdaniel ◴[] No.42174516[source]
I dunno if this is considered off-topic, since it's commentary about the website, but that's twice in the past week I've seen a launch website that must have used a template or something because almost all the links in the footer are href="#". If you don't have Careers, Privacy Policy, Terms, or an opinion about Cookies, then just nuke those links
replies(1): >>42174527 #
23. huntaub ◴[] No.42174523[source]
This is a managed cloud service. If you're interested in using Regatta on-premises, I'd love to hear from you -- shoot me some mail at hleath [at] regattastorage.com
24. huntaub ◴[] No.42174527[source]
Great call out -- we'll get that done. Thanks!
replies(1): >>42174735 #
25. huntaub ◴[] No.42174546{3}[source]
I think this is exactly right, but there are lots of people who don't want to manage their own NFS servers -- that's who we're targeting with Regatta. Notably, I think that v4 delegations gets you close but not close enough to the performance that we're looking for. For example, you can't get a delegation for a directory (which means that you're still doing round trips for CREATE and UNLINK), which seems to be the case even with "nocto". But, I need to spend more time playing around with that.
26. scottlamb ◴[] No.42174554{3}[source]
> For example, you can't use Mountpoint for Amazon S3 for random writes to existing files, appends, or renames.

Can you support these operations with the expected semantics and performance?

If the application makes a one-byte change to a giant file and calls fdatasync, what happens? Do you re-upload the entire file to S3?

How do you handle a rename? Applications commonly do this for atomic replacement on POSIX and expect three properties from this operation:

* fast.
* destination always points to either the original or new afterward (on success or failure); no scenario at which it's lost/truncated.
* no extra storage used (on success or failure).

Do you guarantee any of those? How? I don't see an obvious way from the S3 HTTP API.

Given that POSIX API doesn't support things like arbitrary per-operation deadlines/timeouts, do you think it's suitable as a distributed filesystem API at all? Why?

replies(1): >>42174592 #
27. mdaniel ◴[] No.42174566[source]
> Currently, only the us-east-1 region is supported. Please contact support@regattastorage.com if you need to use a different region.

Bold choice, given what I know about us-east-1

replies(3): >>42174625 #>>42175333 #>>42187812 #
28. huntaub ◴[] No.42174592{4}[source]
The tl;dr of this is -- yes. We have a durable caching layer that we use to stage writes before we asynchronously replicate them to S3. This means that we are able to quickly (<1ms) perform operations like single-byte updates and renames and provide strong read-after-write consistency to other file system clients.

Once the operation is stored in our durable cache, we update your S3 bucket to match what the file system expects. This generally takes around a minute, but could take longer depending on the number of S3 operations a file operation translates to (for example, a directory rename requires a CopyObject call for each object in the directory in S3).

I think that the POSIX API is here to stay (like the S3 API). I agree that it would be better to have timeouts and deadlines, but I don't think that those make it impossible to provide a good distributed file system experience on POSIX (look at Amazon's EFS, Oracle's FSS, Google's Filestore, etc). It just makes the bar for availability higher.
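
For reference, the atomic-replace pattern in question is the usual temp-file-plus-rename dance on the file interface (paths hypothetical); per the answer above, the rename is applied in the durable cache first and then propagated to S3 asynchronously:

    # Classic atomic-replacement pattern on a POSIX file system; paths are
    # hypothetical. The destination always refers to either the old or the
    # new contents, never a truncated mix.
    import os
    import tempfile

    target = "/mnt/regatta/config/settings.json"
    parent = os.path.dirname(target)
    dir_fd = os.open(parent, os.O_DIRECTORY)
    fd, tmp = tempfile.mkstemp(dir=parent)
    try:
        os.write(fd, b'{"version": 2}\n')
        os.fsync(fd)            # data durable before it becomes visible
    finally:
        os.close(fd)
    os.rename(tmp, target)      # atomic swap as seen by file clients
    os.fsync(dir_fd)            # persist the directory entry as well
    os.close(dir_fd)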

29. huntaub ◴[] No.42174606{4}[source]
This is exactly the problem that we solve! You don't need any local disk on your EC2 instance in order to use Regatta or work with data in S3. Our high-speed caching layer plays the role of this local disk for you, so that you can work with data sets that are hundreds of TiBs, even if you only have a 20 GiB EBS volume on your instance.
replies(1): >>42175000 #
30. mbreese ◴[] No.42174615{3}[source]
I think it’s more analogous to MinIO’s discontinued proxy mode. This is where you’d talk to MinIO locally (using whatever interface/protocol) and it would act as a local cache for S3 objects. If you wrote to it, it would propagate the changes up to S3 proper (or to whoever is using the S3 protocol).

I believe they stopped supporting that mode because they didn’t want to keep chasing every S3 protocol change. However, if you’re just using S3, and not trying to masquerade as S3, this problem becomes easier.

31. huntaub ◴[] No.42174625[source]
:sunglasses: We think it's important to be where our customers are, and we're looking to prioritize the next regions that we launch in based on customer demand. We expect to be in more regions by the end of the year.
32. ewuhic ◴[] No.42174683[source]
How does it handle data append and file editing?
replies(1): >>42174706 #
33. memset ◴[] No.42174697[source]
This is honestly the coolest thing I've seen coming out of YC in years. I have a bunch of questions which are basically related to "how does it work" and please pardon me if my questions are silly or naive!

1. If I had a local disk which was 10 GB, what happens when I try to contend with data in the 50 GB range (as in, more than could be cached locally)? Would I immediately see degradation, or thrashing, at the 10 GB mark?

2. Does this only work in practice on AWS instances? As in, I could run it on a different cloud, but in practice we only really get fast speeds due to running everything within AWS?

3. I've always had trouble with FUSE in different kinds of docker environments. And it looks like you're using both FUSE and NFS mounts. How does all of that work?

4. Is the idea that I could literally run Clickhouse or Postgres with a regatta volume as the backing store?

5. I have to ask - how do you think about open source here?

6. Can I mount on multiple servers? What are the limits there? (ie, a lambda function.)

I haven't played with it yet, so maybe doing so would help answer my questions. But I'm really excited about this! I have tried using EFS for small projects in the past but - and maybe I was holding it wrong - I could not for the life of me figure out what I needed to get faster bandwidth, probably because I didn't know how to turn the knobs correctly.

replies(1): >>42174791 #
34. huntaub ◴[] No.42174706[source]
Thanks for the question. We stage writes to a durable, shared caching layer. This allows us to respond quickly to your application when it performs these operations (<1ms), but then asynchronously send those operations to S3 later. When connecting through Regatta, all file system clients see a strongly consistent read-after-write view of the changes on the file system, even if they haven't yet propagated to S3.
35. mmastrac ◴[] No.42174733[source]
I have a few qualms with this app:

1. For a Linux user, you can already build such a system yourself quite trivially by getting an FTP account, mounting it locally with curlftpfs, and then using SVN or CVS on the mounted filesystem. From Windows or Mac, this FTP account could be accessed through built-in software.

... I'm kidding, this is quite useful.

I really wish that NFSv3 and Linux had built-in file hashing ioctls that could delegate some of this expensive work to the backend as it would make it much easier to use something like this as a backup accelerator.

replies(1): >>42174811 #
36. TripleChecker ◴[] No.42174735{3}[source]
Was about to say the same, plus some typos in the documentation section (see here: https://triplechecker.com/s/743384/docs.regattastorage.com)
replies(1): >>42174800 #
37. Shoop ◴[] No.42174736{3}[source]
What are the consistency semantics?
replies(1): >>42174943 #
38. huntaub ◴[] No.42174791[source]
Wow, thanks for the nice note! No questions are silly, and I'll also note that we now have a docs site (https://docs.regattastorage.com) and feel free to email me (hleath [at] regattastorage.com) if I don't fully address your questions.

> If I had a local disk which was 10 GB, what happens when I try to contend with data in the 50 GB range (as in, more that could be cached locally?) Would I immediately see degradation, or thrashing, at the 10 GB mark?

We don't actually do caching on your instance's disk. Instead, data is cached in the Linux page cache (in memory) like a regular hard drive, and Regatta provides a durable, shared cache that automatically expands with the working set size of your application. For example, if you were trying to work with data in the 50 GiB range, Regatta would automatically cache all 50 GiB -- allowing you to access it with sub-millisecond latency.

> Does this only work in practice on AWS instances? As in, I could run it on a different cloud, but in practice we only really get fast speeds due to running everything within AWS?

For now, yes -- the speed is highly dependent on latency -- which is highly dependent on distance between your instance and Regatta. Today, we are only in AWS, but we are looking to launch in other clouds by the end of the year. Shoot me an email if there's somewhere specifically that you're interested in.

> I've always had trouble with FUSE in different kinds of docker environments. And it looks like you're using both FUSE and NFS mounts. How does all of that work?

There are a couple of different questions bundled together in this. Today, Regatta exposes an NFSv3 file system that you can mount. We are working on a new protocol which will be mounted via FUSE. However, in Docker environments, we also provide a CSI driver (for use with K8s) and a Docker volume plugin (for use with just Docker) that handles the mounting for you. We haven't released these publicly yet, so shoot me an email if you want early access.

> Is the idea that I could literally run Clickhouse or Postgres with a regatta volume as the backing store?

Yes, you should be able to run a database on Regatta.

> I have to ask - how do you think about open source here?

We are in the process of open sourcing all of the client code (CSI driver, mount helper, FUSE), but we don't have plans currently to open source the server code. We see the value of Regatta in managing the infrastructure so you don't have to, and even if we released it as open source, it would be difficult to run on your own.

> Can I mount on multiple servers? What are the limits there? (ie, a lambda function.)

Yes, you can mount on multiple servers simultaneously! We haven't specifically stress-tested the number of clients we support, but we should be good for O(100s) of mounts. Unfortunately, AWS locks down Lambda so we can't mount arbitrary file systems in that environment specifically.

> efs performance

Yes, the challenge here is specifically around the semantics of NFS itself and the latency of the EFS service. We think we have a path to solving both of these in the next month or two.

replies(4): >>42175084 #>>42177548 #>>42179172 #>>42179460 #
39. huntaub ◴[] No.42174800{4}[source]
Thank you -- this seems like a fantastic service. You see I consistently miss that "h" in synchronize.
replies(1): >>42176985 #
40. oops ◴[] No.42174808[source]
Congrats on the launch!

Could a Regatta filesystem offer any advantage over ClickHouse's built-in S3 and local disk caching features in terms of cost or performance?

replies(1): >>42174835 #
41. huntaub ◴[] No.42174811[source]
Ha, thank you for the FTP comment, I was hoping someone would make it.

> I really wish that NFSv3 and Linux had built-in file hashing ioctls that could delegate some of this expensive work to the backend as it would make it much easier to use something like this as a backup accelerator.

Tell me a bit more about what you mean here. We're interested in really pushing the limits of what a storage system can do, so I'd be potentially interested.

42. bassp ◴[] No.42174813[source]
This feels, intuitively, like it would be very hard to make crash consistent (given the durable caching layer in between the client and S3). How are you approaching that?
replies(1): >>42174848 #
43. count ◴[] No.42174818[source]
I don't see any other question about it, so maybe I just missed the obvious answer, but how do you handle POSIX ACLs? If the data is stored as an object in S3, but exposed via filesystem, where are you keeping (if at all?) the filesystem ACLs and metadata?

Also, NFSv3 and not 4?

replies(1): >>42174874 #
44. cluckindan ◴[] No.42174821[source]
How does this compare to S3 compatible CSI drivers like DirectPV?
replies(1): >>42174938 #
45. huntaub ◴[] No.42174835[source]
It can offer an advantage over the built-in caching, but it depends on your exact access patterns. For example, if you are running ClickHouse on multiple servers and accessing the same reference data, it's more efficient to cache that data in a centralized location (like Regatta) instead of on the disk of each individual instance.

Philosophically, our goal is to build a standard that can be used in these kinds of applications moving forward, so that application developers don't need to build streaming over and over again and users don't need to learn how to configure each individual system's caching.

46. bithive123 ◴[] No.42174836[source]
How does this compare to Amazon's own offering in this space, the "AWS Storage Gateway"? It can also back various storage protocols with S3, using SSDs for cache, etc. (https://aws.amazon.com/storagegateway/features/)
replies(1): >>42174924 #
47. huntaub ◴[] No.42174848[source]
It depends on what you mean by crash-consistent. I would expect that we handle crash consistency at the client fine (since it is the same crash consistency as NFSv3) and crash consistency at the server also fine (since we are able to detect, using ETags, which version of an object is in the backing data storage). Tell me a bit more about what you're thinking.
replies(1): >>42175193 #
48. mikecwang ◴[] No.42174866[source]
Does it mean I can use Lambda + SQLite + Regatta to build a real pay-as-you-go ACID SQL storage?

Edit: a production-ready (high-durability) ACID SQL storage

replies(2): >>42174881 #>>42175822 #
49. huntaub ◴[] No.42174874[source]
Great call out. Some kinds of data, like ACLs and specific kinds of metadata, don't live in S3. Full disclosure, we don't support ACLs today (but plan to soon). We keep file system metadata in the durable cache. For some files (where users haven't changed permissions, etc), we are able to release that cached metadata when the file is no longer in use. For other files (where permissions have been changed by the user), that metadata must live in the cache long-term.

We selected NFSv3 due to its broad compatibility with different compute environments. For example, Windows has an NFSv3 client in it, but doesn't have an NFSv4 client. There are lots of enterprise workloads which need simultaneous access to file data from both Windows and Linux, and supporting NFSv3 was the easiest path to support those workloads.

replies(2): >>42175271 #>>42177171 #
50. huntaub ◴[] No.42174881[source]
Yes! This is my expectation. Lots of the big companies have already done this with in-house architecture. With Regatta, we want to democratize building stateless applications that can take advantage of the low-cost storage of S3.
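
As a sketch of that pattern (mount path hypothetical, and note the caveat elsewhere in this thread that Lambda itself doesn't allow arbitrary mounts): a stateless handler doing transactional writes to SQLite on the shared file system.

    # Sketch: a stateless handler doing ACID writes to SQLite on a shared
    # mount (path hypothetical). SQLite's locking relies on the file system
    # honoring file locks, which the thread says is supported.
    import sqlite3

    def handler(event):
        conn = sqlite3.connect("/mnt/regatta/state/app.db", timeout=30)
        conn.execute("CREATE TABLE IF NOT EXISTS events (id TEXT PRIMARY KEY, body TEXT)")
        with conn:  # one transaction: commit on success, roll back on error
            conn.execute("INSERT OR REPLACE INTO events VALUES (?, ?)",
                         (event["id"], event["body"]))
        conn.close()
        return {"ok": True}
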
replies(2): >>42174921 #>>42181154 #
51. frakkingcylons ◴[] No.42174886[source]
How does this compare to the log structured virtual disk concept from this paper? It sounds quite similar at a glance.

https://dl.acm.org/doi/10.1145/3492321.3524271

replies(1): >>42175085 #
52. mikecwang ◴[] No.42174921{3}[source]
That's some real tech in YC these days!
53. huntaub ◴[] No.42174924[source]
Great question! We fill the same role as AWS Storage Gateway (and I used to work closely with that team when I was at AWS, lots of respect for what they do). AWS Storage Gateway is built primarily as an appliance to be installed on instances in your own data center to ease migration to the cloud. Many customers do deploy Storage Gateway on EC2 because they want these features in the cloud itself. However, the "appliance" design of Storage Gateway makes it unsuitable for this purpose. For example, Storage Gateway is not designed to run in a cluster for high-availability and doesn't have access to durable, long-term storage to stage and cache writes.

On the other hand, Regatta is designed as a cloud-native gateway product. Regatta's elastic, durable caching layer allows us to efficiently cache large data sets without thrashing, and always efficiently perform writes. Because Regatta is designed to be highly-available, customers don't have to worry about downtime for patching or deployments.

replies(1): >>42175521 #
54. renewiltord ◴[] No.42174933[source]
Fascinating. If this had been around a year ago, we could have used it in our datacenter build-out. For data source reasons, we record data in the cloud. In the past, we'd stick most of the data in S3 and only egress what we needed to run analysis on. The way we'd do that is that we have a machine with 16 * 30 TiB SSDs that acts as our on-prem cache of our S3 data. It did this using a slightly modified goofys with a more modified catfs in front of it, with both the cache and the catfs view exported over NFSv4. We had application-level switching between the cache and the export since our data was really read-only.

When the cache got full, catfs would evict things from it pretty simply. It's overall got a good design but has a few bugs you have to fix, and when you have 100 machines connecting to it, it requires some tuning to make sure that it doesn't all stall. But it worked for the most part.

Anyway, I think this is cool tech. I'm currently doing some bioinformatics stuff that this might help with (each genome sequence is some 100 GiB compressed). I'll give it a shot some time in the next couple of months.

replies(1): >>42175063 #
55. paulgb ◴[] No.42174934{3}[source]
Congrats on the launch, this is really cool! Is the durable cache an attached disk, or are you using a separate AWS product for that?
replies(1): >>42174959 #
56. huntaub ◴[] No.42174938[source]
I could totally be misreading DirectPV, but it appears to be a way to use K8s Persistent Volumes to manage things like NVME drives which are attached to each node, and doesn't provide any tie in to S3 (outside of the fact that it's built to power MinIO).
57. huntaub ◴[] No.42174943{4}[source]
All connected file system clients see strong, read-after-write consistency. Most file operations are synchronized to S3 within a few minutes of completion.
replies(1): >>42175823 #
58. huntaub ◴[] No.42174959{4}[source]
Without getting too much into the details of the system, our durable cache is designed for 5 9s of durability (and we're working on a version that will provide 11 9s of durability soon). You can't achieve those durability numbers on a single attached NVMe device without some kind of replication.
59. Jayakumark ◴[] No.42175000{5}[source]
What is the acceptable latency if we have to use this outside of EC2, let's say mounting S3 from on-prem/GCP/Azure?
replies(1): >>42175691 #
60. zitterbewegung ◴[] No.42175041[source]
I know that Amazon in general has large ingress and egress costs; how much overhead will this application incur?
replies(1): >>42175046 #
61. huntaub ◴[] No.42175046[source]
Those costs only apply to data transfer into and out of AWS. If you're running EC2 instances in AWS, your Regatta file system is in AWS, and your S3 bucket is in AWS -- then you shouldn't incur additional data transfer fees.
replies(1): >>42175151 #
62. huntaub ◴[] No.42175063[source]
That's exactly the kind of thing that I've been hearing lots of teams having to solve individually, and I'm glad that this set up worked out for you. Would love to see you try it for bioinformatics (another industry where this problem seems to show up frequently), feel free to reach out with any questions when you start that.
63. mritchie712 ◴[] No.42175067[source]
Pretty sure we're in your target market. We [0] currently use GCP Filestore to host DuckDB. Here's the pricing and performance at 10 TiB. Can you give me an idea on the pricing and performance for Regatta?

Service Tier: Zonal
Location: us-central1
10 TiB instance at $0.35/TiB/hr
Monthly cost: $2,560.00

Performance Estimate:
Read IOPS: 92,000
Write IOPS: 26,000
Read Throughput: 2,600 MiB/s
Write Throughput: 880 MiB/s

0 - https://www.definite.app/blog/duckdb-datawarehouse

replies(2): >>42175238 #>>42175360 #
64. memset ◴[] No.42175084{3}[source]
Thank you for the detailed answers! Honestly, this project inspires me to work on infrastructure problems.

So you are saying that Regatta's own SaaS infrastructure provides the disk caching layer. So you all make sure the pipe between my AWS instance and your servers is very fast and "infinitely scalable", and then the sync to S3 happens after the fact.

replies(1): >>42175112 #
65. huntaub ◴[] No.42175085[source]
One of the fun parts about working on storage and file systems in particular is that these techniques are as old as time. Log-structured writes, journals, caching, etc -- are all non-novel. However, the benefit to our customers is in how easy we make it for them to use something like this without having to deploy or build it themselves.
66. huntaub ◴[] No.42175112{4}[source]
That's exactly right!
replies(1): >>42183374 #
67. discodave ◴[] No.42175151{3}[source]
Where you say AWS, you mean "a single AWS region"

But anyway, from your YCombinator blurb:

    "When you’re done editing data, it automatically flows back to S3 within a few minutes"
Does this mean Regatta trades consistency for cost (S3 and EBS and local storage are all CP systems these days)?
replies(2): >>42175185 #>>42175282 #
68. huntaub ◴[] No.42175185{4}[source]
Yes, that's correct re: Region -- thanks for the clarification.

In some sense, yes. But the consistency that you're trading away only applies when accessing data through the file interface and the S3 interface simultaneously. The consistency is strong (CP) when you access the data through the file interface. The model that we see most often work is that folks will ingest data through S3 (for example, an 'input/' prefix), and then the file system will process that data and place it in a different directory (for example, an 'output/' folder). Then, if it takes a minute or two for those to update on the other side, it's not a big deal.

69. bassp ◴[] No.42175193{3}[source]
For sure! Upon reflection, maybe I’m less curious about crash consistency (corruption or whatever) per-se, and more about what kinds of durability guarantees I can expect in the presence of a crash.

I’m specifically interested in how you’re handling synchronization between the NFS layer and S3 wrt fsync. The description says that data is “asynchronously” written back out to S3. That implies to me that it’s possible for something like this to happen:

1. I write to a file and fsync it

2. Your NFS layer makes the file durable and returns

3. Your NFS layer crashes (oh no, the intern merged some bad terraform!) before it writes back to S3

4. I go to read the file from S3… and it’s not there!

Is that possible? IE is the only way to get a consistent view of the data by reading “through” the nfs layer, even if I fsync?

replies(1): >>42175348 #
70. jitl ◴[] No.42175213[source]
I’m very interested in this as a backing disk for SQLite/DuckDB/parquet, but I really want my cached reads to come straight from instance-local NVMe storage, and to have a way to “pin” and “unpin” some subdirectories from local cache.

Why local storage? We’re going to have multiple processes reading & writing to the files and need locking & shared memory semantics you can’t get w/ NFS. I could implement pin/unpin myself in user space by copying stuff between /mnt/magic-nfs and /mnt/instance-nvme but at that point I’d just use S3 myself.

Any thoughts about providing a custom file system or how to assemble this out of parts on top of the NFS mount?
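
A rough sketch of that user-space pin/unpin, using the same placeholder paths mentioned above:

    # Rough sketch of the user-space pin/unpin idea above, with the same
    # placeholder paths. "Pinning" copies a subtree onto local NVMe;
    # "unpinning" writes changes back and drops the local copy.
    import shutil
    from pathlib import Path

    NFS = Path("/mnt/magic-nfs")
    NVME = Path("/mnt/instance-nvme")

    def pin(subdir: str) -> Path:
        shutil.copytree(NFS / subdir, NVME / subdir, dirs_exist_ok=True)
        return NVME / subdir   # point readers/writers at the local copy

    def unpin(subdir: str) -> None:
        shutil.copytree(NVME / subdir, NFS / subdir, dirs_exist_ok=True)
        shutil.rmtree(NVME / subdir)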

replies(1): >>42175408 #
71. austinpena ◴[] No.42175229[source]
Reminds me of https://www.lucidlink.com/ for video editors. I quite like the experience with them.
replies(1): >>42175295 #
72. weinzierl ◴[] No.42175235[source]
The title says POSIX but then it talks about NFS. So, what is it? Does it guarantee all POSIX semantics or not?
replies(1): >>42175277 #
73. huntaub ◴[] No.42175238[source]
Yes, you should be in our target market. I don't think that I can give a cost estimate without having a good sense of what percentage of your data you're actively using at any given time, but we should absolutely support the performance numbers that you're talking about. I'd love to chat more in detail, feel free to send me a note at hleath [at] regattastorage.com.
replies(1): >>42175518 #
74. kiririn ◴[] No.42175244[source]
How does this differ from rclone mount and its vfs/caching system, possibly combined with mergerfs or rclone union for cache tiering?
replies(1): >>42175331 #
75. count ◴[] No.42175271{3}[source]
Thanks, I keep hoping someone comes up with some magic :)

Is the intent to run this in-vpc?

And how do you differentiate from AWS Storage Gateway?

replies(1): >>42175368 #
76. huntaub ◴[] No.42175277[source]
You are correct in that NFS is not strictly-speaking POSIX compliant to the letter of the law, due to the caching behavior. This is an NFSv3 file system, so it shares those semantics. The point that I'm trying to emphasize is that the file system supports standard file operations which aren't possible through other FUSE adapters, or possible to perform efficiently on S3 (such as append, rename, and symbolic links) -- which provides broad compatibility with file-based applications.
replies(1): >>42175728 #
77. ec109685 ◴[] No.42175282{4}[source]
It async replicates to s3, while providing a consistent interface to storage clients.
78. huntaub ◴[] No.42175295[source]
That's exactly right, I've spoken with a ton of folks who have had a good experience with Lucid Link. I think that we are in a slightly different part of the market (in that we aren't targeting video editors, and more of data-intensive applications which may use thousands of IOPS), but I appreciate that the technology is likely similar.
79. huntaub ◴[] No.42175331[source]
Yes, you can absolutely get similar functionality with rclone. However, what we are solving for our customers is the ability to do this without thinking about infrastructure or deployments. Customers don't need to worry about data durability, replication, recovering off of failed drives, or availability through deployments or patches.
80. ec109685 ◴[] No.42175333[source]
Total curiosity, but what’s the limiting factor of scaling out to multiple regions day one?
replies(1): >>42175429 #
81. huntaub ◴[] No.42175348{4}[source]
So, the step that differs from your concern is Step 3. Let's say that we have a catastrophic availability scenario (as you said, intern comes in and tears down something) -- our job is to make sure that the data in our durable cache remains there (and to put safeguards in place to prevent the intern from hitting that data). If we do that, then any crash of our system will get the data back and be able to apply it to S3. I know that's kind of hand-wavy, but this is how things like AWS S3 work -- just having a super high bar for processes around operations to keep data safe.
replies(2): >>42175401 #>>42175421 #
82. _bare_metal ◴[] No.42175360[source]
Out of curiosity, why not go bare metal in a managed colocation? Is that for the geographic spread? Or unpredictable load?

Every few months of this spend is like buying a server

Edit: back at my pc and checked, relevant bare metal is ~$500/m, amortized:

https://baremetalsavings.com/c/LtxKMNj

Edit 2: for 100tb..

replies(3): >>42175534 #>>42176875 #>>42189500 #
83. huntaub ◴[] No.42175368{4}[source]
I'd love to hear more about what you're excited to do when the magic arrives. :D

We are running it as a managed SaaS, so our customers connect to the caching layer that runs in the Regatta VPC. This allows us to manage the infrastructure for them and keep costs low.

Storage Gateway is an interesting product, and I worked closely with that team for several years -- so mad respect for them. It was designed to be an appliance that you run on servers in your own data center (of course, many customers now deploy it to EC2). Because of this, it's designed to operate in an environment with "finite storage" -- for example, different workload patterns can thrash the cache, which results in poor performance to clients, and it's not designed to run in a high-availability cluster in the cloud. Regatta solves these problems with durable cache storage that's safe to keep data in long-term, and is designed for high availability.

84. debarshri ◴[] No.42175379[source]
There are quite a few noteworthy alternatives like s3fs, rclone, goofys, etc.
replies(1): >>42175395 #
85. huntaub ◴[] No.42175395[source]
This is accurate! A lot of people have spent a lot of time trying to build a good file system abstraction on cheap, S3 storage. However, Regatta differs from these solutions in two important ways. First, Regatta is a shared, durable caching layer that sits between your instances and S3. This means that Regatta is able to efficiently perform operations (like directory renames) and provide strong consistency to other file system clients (whereas s3fs and other FUSE file systems would need to actually perform those operations in S3 for other clients to see the output). Secondly, Regatta is designed to support all file system operations. This means that you can do file locking, random writes, appends, and renames -- even when they aren't efficient to perform on S3.
replies(1): >>42175570 #
86. bassp ◴[] No.42175401{5}[source]
Gotcha! Thanks for the answer; so the tl;dr is, if I’m understanding:

“All fsync-ed writes will eventually make it to S3, but fsync successfully returning only guarantees that writes are durable in our NFS caching layer, not in the S3 layer”?

87. huntaub ◴[] No.42175408[source]
Hey -- I think this is something that's in-scope for our custom protocol that we're working on. I'd love to chat more about your needs to make sure that we build something that will work great for you. Would you mind shooting an email to hleath [at] regattastorage.com and we can chat more?
replies(1): >>42184410 #
88. huntaub ◴[] No.42175421{5}[source]
For some reason, I don't see a "reply" button to your later comment (maybe there's an HN threading limit), but the answer is yes -- fsync guarantees durability in the Regatta durable cache, not in S3.
89. huntaub ◴[] No.42175429{3}[source]
Time! We don't have a lot of people right now, so every minute that we spend launching infrastructure (especially in non-AWS clouds) is a minute that we can't spend on performance improvements for our customers.
90. eerikkivistik ◴[] No.42175443[source]
FlexFS kicks ass. I benchmarked it for our data storage and processing layers in value.space (satellite data processing and analysis) and we will most likely migrate to FlexFS in the near future.

Out of curiosity, why did you choose EFS? It's insanely expensive at even modest scales.

91. doctorpangloss ◴[] No.42175477[source]
How does this differ from AWS Storage Gateway?
replies(1): >>42175484 #
92. huntaub ◴[] No.42175484[source]
(full disclosure, reposted from a comment below)

Great question! We fill the same role as AWS Storage Gateway (and I used to work closely with that team when I was at AWS, lots of respect for what they do). AWS Storage Gateway is built primarily as an appliance to be installed on instances in your own data center to ease migration to the cloud. Many customers do deploy Storage Gateway on EC2 because they want these features in the cloud itself. However, the "appliance" design of Storage Gateway makes it unsuitable for this purpose. For example, Storage Gateway is not designed to run in a cluster for high-availability and doesn't have access to durable, long-term storage to stage and cache writes.

On the other hand, Regatta is designed as a cloud-native gateway product. Regatta's elastic, durable caching layer allows us to efficiently cache large data sets without thrashing, and always efficiently perform writes. Because Regatta is designed to be highly-available, customers don't have to worry about downtime for patching or deployments.

93. bcardarella ◴[] No.42175507[source]
Why are you guys hijacking the scroll bar on your website?
replies(1): >>42175511 #
94. eerikkivistik ◴[] No.42175510[source]
Can you elaborate on a few things with regards to your pricing:

* What does "$0.05 / gigabyte transferred" mean exactly? Transferred outside of AWS, or accessed as in read and written data?

* "$0.20/GiB-mo of high-speed cache" – how is the high-speed cache amount computed?

replies(1): >>42175529 #
95. huntaub ◴[] No.42175511[source]
Just the theme that we ended up using for the marketing site. We will likely build something less janky post-batch, but right now -- just trying to get the information out there.
96. mritchie712 ◴[] No.42175518{3}[source]
I'll send you a note!

Found this in the docs:

> By default, Regatta file systems can provide up to 10 Gbps of throughput and 10,000 IOPS across all connected clients.

Is that the lower bound? The 50 TiB Filestore instance has 104 Gbps read throughput (albeit at a relatively high price point).

replies(1): >>42175541 #
97. doctorpangloss ◴[] No.42175521{3}[source]
S3 File Gateway sounds a lot like your product.
replies(1): >>42175552 #
98. huntaub ◴[] No.42175529[source]
Sure, and we have more details on pricing here which may answer your questions: https://docs.regattastorage.com/details/pricing

We need to update the home page with these details, but $0.05 is only charged on transfer between Regatta and S3. We calculate your cache usage minutely and tally it into a monthly usage amount that we then bill for.
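
As a rough back-of-the-envelope under those published rates (the workload numbers here are made up, and S3's own storage and request charges are billed separately by AWS):

    # Back-of-the-envelope using the published rates; workload figures are
    # made-up placeholders. S3 storage/request costs are separate.
    CACHE_RATE = 0.20      # $ per GiB-month of high-speed cache
    TRANSFER_RATE = 0.05   # $ per gigabyte moved between Regatta and S3

    avg_cache_gib = 500    # average hot working set held in the cache
    gb_synced = 2_000      # data synchronized with S3 over the month

    monthly = avg_cache_gib * CACHE_RATE + gb_synced * TRANSFER_RATE
    print(f"~${monthly:,.2f}/month")   # ~$200.00/month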

replies(1): >>42175583 #
99. mritchie712 ◴[] No.42175534{3}[source]
agreed, one month of 50 TiB is $12,800!

we're using Filestore out of convenience right now, but actively exploring alternatives.

100. huntaub ◴[] No.42175541{4}[source]
That's just the limit that we apply to new file systems. We should be able to support your 104 Gbps of read throughput.
101. huntaub ◴[] No.42175552{4}[source]
Also true! If you look at their site, they're really targeting folks to deploy it into their data centers to provide on-premises caching of resources in AWS, rather than providing a high-speed cache within AWS for file-based applications.

https://aws.amazon.com/storagegateway/file/s3/

102. aidos ◴[] No.42175570{3}[source]
Super interesting product. I have a couple of questions:

In terms of storing in S3 - is that in your buckets? Sounds like the plan is to run the caching on your infrastructure; are there plans to allow customers to run those instances themselves?

Presumably the format within S3 is your own bespoke format? What does the migration strategy look like for people looking to move into or out of your infrastructure? They effectively pull everything down from their S3 to the local “filesystem”?

replies(1): >>42175591 #
103. eerikkivistik ◴[] No.42175583{3}[source]
Thanks for clearing that up. Few followup questions:

You don't actually directly charge for storage itself, so I assume this is a "bring your own S3 bucket" type of deal, correct?

How long does data that is no longer being accessed sit in the cache and count towards billing?

As for availability, are you in the process or do you have plans to also support Google Cloud?

replies(1): >>42175607 #
104. huntaub ◴[] No.42175591{4}[source]
I love this because it allows me to highlight the parts of the system that I'm most excited about. The Regatta caching runs on our infrastructure, but it connects to buckets that our customers control. We read and write data into the customer's bucket in a regular, native (not bespoke) format -- so you can connect a Regatta file system directly to a bucket that already exists, with data in it, and use that data from a file system without any data migration!
replies(2): >>42176352 #>>42176644 #
105. huntaub ◴[] No.42175607{4}[source]
> You don't actually directly charge for storage itself, so I assume this a "bring your own s3 bucket" type of deal, correct?

That's correct -- we store data in the customer S3 bucket.

> How long does data, that is no longer being accessed sit in the cache and count towards billing?

We keep data in the cache for up to 1 hour after you've stopped accessing it.

> As for availability, are you in the process or do you have plans to also support Google Cloud?

We have plans to support Google Cloud. If you're interested in using us from GCP, I'd recommend setting up some time to chat (either use the website or email me at hleath [at] regattastorage.com). We are prioritizing where we launch our infrastructure next based on customer demand.

replies(2): >>42175630 #>>42187696 #
106. eerikkivistik ◴[] No.42175630{5}[source]
I might just take you up on that.
107. huntaub ◴[] No.42175691{6}[source]
Well, in my opinion, I want to deliver the lowest latency possible. I expect that we will have Regatta running in GCP and Azure within the next 6 months. I'd love to connect if there's a place on-prem that you're looking to use Regatta. Would you shoot an email to hleath [at] regattastorage.com, and we could chat about what you're looking for?
108. weinzierl ◴[] No.42175728{3}[source]
Which is nice and useful of course, but there is a ton of things that can't reliably be done with that (like running any database that comes to mind), which makes it important to be precise here.
replies(1): >>42175820 #
109. huntaub ◴[] No.42175820{4}[source]
Is there something specific that you worry about when running a database on a networked file system? I would imagine that any database which is correctly fsync'ing the data to the write-ahead-log should work just fine.
replies(1): >>42189101 #
110. jedberg ◴[] No.42175822[source]
Curious as to why you would want to build that yourself when so many solutions already exist (Supabase, NeonDB, AWS Aurora or RDS, etc.)?
replies(1): >>42175971 #
111. bobnamob ◴[] No.42175823{5}[source]
Do you do anything to handle/detect write conflicts?
replies(1): >>42176030 #
112. kmclean ◴[] No.42175852[source]
Just want to say this is super cool. I'm excited to see what people build on top of it.. seems like it could enable a new category of hosted data platforms-as-a-service (platform-as-a-services?).
replies(1): >>42175954 #
113. inopinatus ◴[] No.42175870[source]
I rejected EFS as a common caching and shared files layer, despite being technologically an excellent fit for my stack, because it is astronomically expensive. The value created didn’t match the cost.

When I got in touch about that, I was confronted with a wall of TCO papers, which tells me the product managers evidently believe their target segment to be Gartner-following corporate drones. This was a further deterrent.

We threw that idea away and used memcached instead, with common static files in a package in S3.

I guess I’m suggesting, don’t be like EFS when it comes to pricing or reaching customers.

replies(1): >>42176102 #
114. koolba ◴[] No.42175879{3}[source]
Is it fair to say this is best suited for small files that will be written infrequently?

There’s no partial write for S3, so editing a small range of a 1 GiB file would repeatedly upload the full file to the backing S3, right?

Or is the s3 representation not the same hierarchy as the presented mount point? (ie something opaque like a log structured / append only chunked list)

replies(1): >>42176003 #
115. the_duke ◴[] No.42175912{3}[source]
So, I assume you use a journal in the cache server.

A few related questions:

* Do you use a single leader for a specific file system, or do you have a cluster solution with consensus to enable scaling/redundancy?

* How do you guarantee read-after-write consistency? Do you stream the journal to all clients and wait for them to ack before the write finishes? Or at least wait for everyone to ack the latest revisions for files, while the content is streamed out separately/requested on demand?

* If the above is true, I assume this is strictly viable for single-DC usage due to latency? Do you support different mount options for different consistency guarantees?

replies(1): >>42175986 #
116. huntaub ◴[] No.42175954[source]
This is more or less exactly what I'm hoping for. I think that people are excited to build stateless applications, but often that requires really specialized application and storage knowledge to pull off. My hope is that people can use this generic storage layer to build the next generation of stateless applications (including things like databases) without having to become storage experts themselves. I'm also excited to see what they build.
117. huntaub ◴[] No.42175971{3}[source]
One of my hopes for Regatta is that we're able to power the next generation of these data platforms. These things work because the designers had specialized storage knowledge that allowed them to carefully build serverless data products. I hope that Regatta is generic enough to allow anyone to build a serverless data product moving forward, without having to think about their storage infrastructure.
replies(1): >>42177203 #
118. huntaub ◴[] No.42175986{4}[source]
These are questions that are super specific to our implementation, so I'm hesitant to share them publicly because they could change at any time. I can share that we're designed to horizontally scale the performance of each file system, and our custom protocol will enable Lustre-like scale-out performance. As for single- vs. multi-DC, I think that you'd be surprised at how much latency budget there is (a cross-DC round trip in AWS can be anywhere from 200us-700us, and EBS gp3 latencies are around 1000us).
119. yawnxyz ◴[] No.42176002[source]
oh interesting, I'd love to mount this to Finder on Mac, and load a bunch of massive bioinformatics databases on there and treat it like another folder

I'm also using Cloudflare R2 (S3 compatible) and would love for that to work out of the box

replies(2): >>42176017 #>>42176113 #
120. huntaub ◴[] No.42176003{4}[source]
It's hard to define "best", and in many cases, the answers to these questions depend heavily on the workload and the caching parameters (how long do we wait before flushing to S3, etc). We are designed to provide good file system performance, even if customers are repeatedly writing small pieces of data to a 1 GiB file, so "best" in this case is a question of whether or not it's cost efficient.
121. huntaub ◴[] No.42176017[source]
I know a lot of folks have asked me for local support, and while I can share that this would work from OS X -- it's not something that I would recommend doing outside of a data center because the semantics of a networked file system on a sporadic internet connection (when compared to a data center) aren't great -- unless you're doing something higher level like Dropbox. However, it's something we're considering for next year.
122. huntaub ◴[] No.42176030{6}[source]
Write conflicts between the file system and S3 should be rare (by definition, applications shouldn't yet be designed to do this because Regatta doesn't exist). We do some tracking of the object etag to at least throw an alert if we find that something unexpected has happened, and we're looking at the best UX to expose that to customers soon.
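To make the idea concrete, here's a rough sketch of etag-based change detection with boto3 (illustrative only, not our actual implementation; the bucket, key, and stored etag are placeholders):

    import boto3

    s3 = boto3.client("s3")

    def detect_external_change(bucket, key, etag_at_cache_time):
        # Compare the object's current ETag against the ETag recorded when it
        # was cached; a mismatch means someone modified it directly in S3.
        head = s3.head_object(Bucket=bucket, Key=key)
        if head["ETag"] != etag_at_cache_time:
            # surface an alert instead of silently overwriting the object
            print(f"warning: {key} changed in S3 since it was cached")
            return True
        return False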
123. huntaub ◴[] No.42176102[source]
It's certainly my hope to be cost effective, but I understand the worry and I'm sorry that you had that experience with the PMs of that time. At the end of the day, I see my target customers as those who aren't interested in running their own infrastructure and having to manage availability and durability (in the memcached case, things like needing to pre-warm the cache). I understand that it still may be possible to be more cost effective if you're willing to trade off ease of use to deal with those other concerns.
124. JZL003 ◴[] No.42176113[source]
You can use rclone mount, depends on how much you're flipping through files or actually doing lots of IO

I wouldn't want to host fastqs or something and use this for alignment, but for spot checking raw fastqs it could be nice

replies(1): >>42176192 #
125. therealmarv ◴[] No.42176192{3}[source]
This reminds me of using rclone mount on terabytes of data where I mostly wanted some "smaller" files between 200 kB and 1.5 MB in a single directory. I made rclone mount significantly faster by having it cache into a RAM disk (there is a free tool to make RAM disks on macOS too).
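For anyone who wants to reproduce that setup, something along these lines works (sizes and paths are just examples, and this uses macOS's built-in hdiutil/diskutil instead of a separate tool):

    # create a ~2 GiB RAM disk on macOS (ram:// size is in 512-byte sectors)
    diskutil erasevolume HFS+ "rclonecache" $(hdiutil attach -nomount ram://4194304)

    # point rclone's VFS cache at the RAM disk
    rclone mount remote:bucket /path/to/mount \
        --vfs-cache-mode full \
        --cache-dir /Volumes/rclonecache \
        --vfs-cache-max-size 1.5G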
126. hitekker ◴[] No.42176227[source]
This looks quite compelling.

But it's not clear how it handles file update conflicts. For example: if User A updates File X on one computer, and User B updates File X on another computer, what does the final file look like in S3?

replies(1): >>42176263 #
127. huntaub ◴[] No.42176263[source]
Hey there, our file system is strongly consistent for all connected file system clients. For example, if User A and User B are both connected via Regatta, then this works like any other NFS file system (in that they can use file locks, atomic renames or other techniques to ensure that one write wins). However, if User A and User B are accessing the data through different protocols (for example User A is using Regatta and User B is accessing the data through S3), then it's possible to get undefined behavior by attempting to simultaneously update the same piece of data from both places. We think that these applications are rare, and (almost by definition) likely don't exist right now. For the most part, customers use file storage as a "stage" in a broader workflow (for example, customers may ingest data through S3 and then process it on a file system), and that is totally consistent.
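As a small illustration of one way clients could coordinate (the path is made up), two writers on the same Regatta/NFS mount can take an advisory POSIX lock so that one write wins:

    import fcntl

    # both User A and User B run something like this against the shared mount
    with open("/mnt/regatta/shared/state.json", "r+") as f:
        fcntl.lockf(f, fcntl.LOCK_EX)   # blocks until the exclusive lock is granted
        try:
            data = f.read()
            # ... modify data ...
            f.seek(0)
            f.write(data)
            f.truncate()
        finally:
            fcntl.lockf(f, fcntl.LOCK_UN)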
128. osigurdson ◴[] No.42176279[source]
If using EFS already, how would the pricing / performance compare? Or is that maybe not a use case for regatta storage?
replies(1): >>42176345 #
129. whinvik ◴[] No.42176321[source]
I am not your target audience but I have been thinking of building a very minified version of this using [0] Pooch and [1] S3FS.

Right now we spend a lot of time downloading various stuff from HTTP or S3 links and then figuring out folder structures to keep them in our S3 buckets. Pooch really simplifies the caching for this by having a deterministic path on your local storage for downloaded files, but has no S3 backend.

So a combination of the two would be to have a single call against a link that handles the caching both locally and in our S3 buckets deterministically.

[0] https://www.fatiando.org/pooch/latest/ [1] https://s3fs.readthedocs.io/en/latest/
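Roughly the shape I have in mind (a minimal sketch; the bucket name, prefix, and wrapper function are placeholders):

    import os
    import pooch   # deterministic local cache for downloads
    import s3fs    # filesystem-style access to S3

    fs = s3fs.S3FileSystem()
    BUCKET = "our-team-bucket"

    def fetch(url, known_hash=None):
        # pooch caches the download under a deterministic local path
        local = pooch.retrieve(url=url, known_hash=known_hash)
        # mirror it into our bucket under an equally deterministic key
        key = f"{BUCKET}/mirror/{os.path.basename(local)}"
        if not fs.exists(key):
            fs.put(local, key)
        return local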

replies(1): >>42176384 #
130. huntaub ◴[] No.42176345[source]
It depends on what you're doing with EFS! For the most part, I would expect to be lower cost than EFS. If you're running a workload where individual files are primarily written or accessed from a single instance, I would expect a significant improvement in performance. If you have some time, I'd love to chat more deeply about what you're doing. Feel free to grab some time on my calendar from the Demo link on the Regatta home page or shoot me an email at hleath [at] regattastorage.com.
131. ◴[] No.42176352{5}[source]
132. huntaub ◴[] No.42176384[source]
I think this is a great insight, and something that I think about often. The challenge that I see is that the scientist archetype (whether it's a data scientist, AI researcher, or anything else) isn't really interested in doing software development for these kinds of things. They just want the data to be there, and it's super nice to be able to click through the S3 console to see and share the data they're using. I think that what you're doing is a great idea for folks who are accessing their data primarily through Python programs!
133. ragulpr ◴[] No.42176558[source]
Love this idea! The biggest hurdle though has been getting predictable auth & IO across multiple Python/Scala versions and all the other things (Spark, orchestrators, CLIs of teams on varying types of OS, etc.), plus access logs.

S3Fs/boto/botocore versions x Scala/Spark x Parquet x Iceberg x k8s readers' own assumptions make reading from S3 alone a maintenance and compatibility nightmare.

Will the mounted system _really_ be accessible as a local fs and seen as such by all running processes? No surprises? No need for a Python-specific filesystem like S3Fs?

If so, then you will win 100%. I wouldn't even care about speed/cost if it's up to par with S3.

replies(1): >>42176673 #
134. nwgo ◴[] No.42176608[source]
Is there any open source alternative to something like this?
replies(1): >>42176701 #
135. geophile ◴[] No.42176640[source]
How does this differ from what Nasuni offers?
replies(1): >>42176727 #
136. aidos ◴[] No.42176644{5}[source]
Oh interesting! So you map exactly to the structure in s3? It’s like fuse backed by s3 with good performance?
replies(1): >>42177662 #
137. huntaub ◴[] No.42176673[source]
Yeah, that's exactly right. I had some... experiences with Spark recently that convinced me that this is something that could really help. I also really like the idea that organizations can continue to use S3 as the source of truth for their data (as you mention, it means that you can continue to use Access Logs, which would capture all usage of your S3 bucket across your applications).

> Will the mounted system _really_ be accessible as local fs and seen as such to all running processes? No surprises? No need for python specific filesystem like S3Fs?

Ha, well it depends on what you mean by surprises. We won't have a Python-specific file system. Our client is going to come in two flavors. Today, you can mount Regatta over NFSv3 (which we wrap in TLS to make it secure). This works for some workloads, but doesn't provide like-for-like performance with EBS. Over the next month, we plan to release the "custom protocol" that I wrote about above, that we expect to send to customers in the form of a FUSE file system.

Either way, it should be one package, you shouldn't need to worry about versioning, and it will appear as a real, local file system. :D
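For reference, mounting over NFSv3 today looks like a standard NFS mount (the endpoint and options below are placeholders for illustration, not our exact recommended settings):

    sudo mount -t nfs -o nfsvers=3,rsize=1048576,wsize=1048576,hard \
        <your-regatta-endpoint>:/ /mnt/regatta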

138. garganzol ◴[] No.42176699[source]
I used the same approach based on Rclone for a long time. I wondered what makes Regatta Storage different than Rclone. Here is the answer: "When performing mutating operations on the file system (including writes, renames, and directory changes), Regatta first stages this data on its high-speed caching layer to provide strong consistency to other file clients." [0].

Rclone, on the contrary, has no layer that would guarantee consistency among parallel clients.

[0] https://docs.regattastorage.com/details/architecture#overvie...

replies(4): >>42176790 #>>42177002 #>>42178556 #>>42182970 #
139. huntaub ◴[] No.42176701[source]
Hey, thanks for asking. It very much depends on which aspect of Regatta you're interested in using. I know of a couple of different architectures -- some folks wrote in "rclone" in the thread, I know of people using SeaweedFS if you want to host storage infrastructure yourself, etc.

I'd love to know a bit more about why you're looking for an open source alternative. Is it because of costs (i.e. you'd like an open source alternative that doesn't require you to pay) or if it's because of the operating environment (i.e. you want an open source alternative so that you can deploy it to your own infrastructure)? There are some things that we are exploring around deploying onto your own infrastructure over the next 12 months, but I'd love to learn more. Feel free to respond here or email me at hleath [at] regattastorage.com.

140. huntaub ◴[] No.42176727[source]
Hey there, I have mutual friends with some of the Nasuni folks, and I have a lot of respect for what they do. In particular, Nasuni stores data in a proprietary block format in your S3 bucket, so you can't connect it to existing data sets or use that data directly from S3 out the other side. Whereas with Regatta, we store data in its native format in S3 so you can do these things.

What's cool about the storage market is that there are so many impressive companies because there are so many varied needs from customer applications! We're hoping to become a simple "default" for teams who are writing applications in the cloud.

141. convivialdingo ◴[] No.42176750[source]
Wow, looks like a great product! That's a great idea to use NFS as the protocol. I honestly hadn't thought of that.

Perfect.

For IBM, I wrote a crypto filesystem that works similarly in concept, except it was a kernel filesystem. We crypto-split the blocks into 4 parts and stored them in the cache. A background daemon listened to events and synced blocks to S3, orchestrated with a shared journal.

It's pure magic when you mount a filesystem on clean machine and all your data is "just there."

replies(1): >>42176804 #
142. random3 ◴[] No.42176756[source]
In (March?) 2007 (correction: 2008), two other engineers and I, in a small conference room in Bucharest in front of Bruce Chizen - Adobe's CEO - demoed a photo taken with an iPhone automagically showing up as a file on a Mac. I implemented the local FUSE talking to Ozzy - Adobe's distributed object store back then - using an equivalent of a Linux inode structure. It worked like a charm, and if I remember correctly it took us a few days to build it. It was a success just as much as Adobe's later choices around http://Photoshop.com were a huge failure. A few months later, Dropbox launched.

That kickstarted about a decade of (actual) research and development led by my team, which positioned the Bucharest center as one of the most prolific centers in distributed systems within Adobe, and of Adobe within Romania.

But I didn't come up with the concept; it was Richard Jones who inspired us with the GMail Drive, which used FUSE with Gmail attachments back in 2004, when I got my first Gmail account while still in college: https://en.wikipedia.org/wiki/GMail_Drive. I guess I'm old, but I find it funny to see "Launch HN: Regatta Storage (YC F24) – Turn S3 into a local-like, POSIX cloud FS".

replies(2): >>42176840 #>>42189051 #
143. garganzol ◴[] No.42176764[source]
Super interesting project. But I cannot understand why you support only EC2 instances as clients. For what it is worth, it looks strange and limiting. By default I expect to be able to use Regatta Storage from everywhere: from my local machine, from my Docker containers running elsewhere, etc.
replies(1): >>42176829 #
144. cvalka ◴[] No.42176787[source]
SeaweedFS and GarageFS?
replies(1): >>42176903 #
145. huntaub ◴[] No.42176790[source]
This is exactly right, and something that we think is particularly important for applications that care about data consistency. Often times, we see that customers want to be able to quickly hand off tasks from one instance to another which can be incredibly complex if you don't have guarantees that your new operations will be seen by the second instance!
replies(1): >>42177109 #
146. huntaub ◴[] No.42176804[source]
> It's pure magic when you mount a filesystem on clean machine and all your data is "just there."

I totally agree! I am hoping that Regatta can power a future where teams don't need more than ~8 GiB of local storage for their operating system, and can store the rest on something like Regatta to get rid of the waste of overprovisioned block volumes.

replies(1): >>42177044 #
147. alfalfasprout ◴[] No.42176823[source]
Can you comment on how this is different from https://aws.amazon.com/blogs/aws/mountpoint-for-amazon-s3-ge... ?
replies(1): >>42176847 #
148. huntaub ◴[] No.42176829[source]
This isn't a technical limitation, per se, but a time limitation in terms of getting to the place where we feel comfortable supporting those environments for the public. I still wouldn't recommend mounting it from a local environment (because NFS behaves pretty poorly when it can't connect to the server), but we do have a CSI driver for containers running in K8s. We expect that customers will get the best experience if their instances are very close (latency-wise) to our instances, which is why we only support access from us-east-1 in AWS. We expect to launch in more regions and clouds in the coming months.

If you want early access to other clouds or the CSI driver, feel free to email hleath [at] regattastorage.com.

149. huntaub ◴[] No.42176840[source]
The funny thing about storage is that all of the problems are the same! Ultimately, there is no problem that cannot be solved with caching, journaling, write-ahead logging, etc. I think what makes the problem space so interesting is how a million different products can make a million different trade offs with these tools to deliver on their customer needs. File systems are awesome.
replies(1): >>42177269 #
150. huntaub ◴[] No.42176847[source]
Sure can, full disclosure, copied from a comment below:

Thanks for the question! Mountpoint for Amazon S3 is a FUSE layer that doesn't support full POSIX semantics. For example, you can't use Mountpoint for Amazon S3 for random writes to existing files, appends, or renames. This means that you have to carefully instrument your application to understand whether or not it's compatible with Mountpoint, which can be error-prone. Regatta, on the other hand, provides full POSIX compatibility for the file interface, which means that it works out-of-the-box with all file based applications.

151. bks ◴[] No.42176861[source]
Similar to objectiveFS - we use this in production for email sync between multiple postfix servers and dovecot. Is this a supported use case?
replies(1): >>42176886 #
152. nine_k ◴[] No.42176875{3}[source]
Hiring someone who knows how to manage bare metal (with failover and stuff) may take time %)
replies(1): >>42177892 #
153. huntaub ◴[] No.42176886[source]
There isn't any reason that it shouldn't be a supported use case, depending on your exact performance needs and workflow. It's very similar to ObjectiveFS except that it operates on the data in your S3 bucket in its native format, so you can point it at existing data sets and use the newly written data directly from S3.
154. kevitivity ◴[] No.42176894[source]
I know for a while Fuse was considered a security nightmare. My own org banned the use of it. Have things gotten better?
replies(1): >>42176954 #
155. huntaub ◴[] No.42176903[source]
These distributed storage systems solve very similar problems, depending on how you use them. Our target customers aren't looking to deploy their own infrastructure, so having a "single-click" option without having to think about how much capacity they need is very valuable.
156. huntaub ◴[] No.42176954[source]
Huh, that's interesting. I wouldn't imagine that there are security problems specific to FUSE compared to any other software that you would run on your servers. Regardless, I see FUSE as the fastest path to getting our protocol into the hands of our customers. In the fullness of time, I hope that we can deliver it as either a kernel module or in-tree.
157. TripleChecker ◴[] No.42176985{5}[source]
Appreciate the kind words!
158. freedomben ◴[] No.42177002[source]
Thanks, this was my thought as well. I use and love rclone and it wasn't immediately clear what this offered above that
159. lijok ◴[] No.42177044{3}[source]
That would sell like hot cakes to the public sector.
replies(2): >>42177636 #>>42180422 #
160. Jugurtha ◴[] No.42177108{3}[source]
I think it's complementary as well, even more so after MinIO deprecated its Gateway and Filesystem modes a couple of years ago. MinIO is "S3 compatible" object storage, so technically MinIO users should be able to use your product to have a file-system-like experience on their buckets and objects, although you're using IAM and there might be a need either for your client to handle pure S3 credentials, or for a third-party plugin to your client to do that. It could be a good opportunity to piggyback on MinIO's userbase.

We had built an MLOps platform[0] a few years ago and enabled users to use their S3 buckets in a "file system like" manner. This made it possible for them not to have to know or write S3 specific code in their Jupyter notebooks as most people in the industry did with boto3, which also forced them to write code (say using TensorFlow) in a certain way for training to consume the files, err, objects. It was a mess, and we removed that for notebooks that could run the same way on a laptop or on the platform, even with the shell kernel so people could explore objects like files. MLFlow could work on a filesystem or on S3, but it had no authentication, so we built around that to know which user/experiment produced which artifact.

MinIO had a Gateway that was deprecated. We didn't use it much and they didn't have an admin client at the time, so I rolled one up to orchestrate the thing.

The way I did it, hooking into users' compute and storage as opposed to offering storage/compute, was for a few reasons:

- Organizations already had their data somewhere with established policies. Getting them to move that data is very hard (CISO, CTO, IT, legal, engineers). Friction would have been huge.

- Organizations already had budgeted compute and storage, they may have had contracts/discounts/credits with cloud providers and it didn't make sense to ask them to make a decision on budgeting for another solution.

- A design principle of having the product being able to die without leaving the users scrambling to exfil/migrate data.

One way to do it was to handle FUSE, and your mileage may vary (s3fs-fuse, goofys, etc). Amazon released Mountpoint last year[1], and one question you'll get asked is: why use Regatta when I could use Mountpoint?

Less friction for engineers and execs.

In any way, congratulations on the launch, man!

[0]: https://web.archive.org/web/20230325150132/https://iko.ai/

[1]: https://aws.amazon.com/blogs/aws/mountpoint-for-amazon-s3-ge...

replies(1): >>42177486 #
161. wanderingmind ◴[] No.42177109{3}[source]
Might be useful to show the differences with Rclone, s3fs as a table to make it obvious
replies(1): >>42177304 #
162. secabeen ◴[] No.42177171{3}[source]
Do you pay for metadata accesses? Does running a `find` across the filesystem cost anything? What about system calls that don't transfer data? Can I move or rename a file without paying to copy and then delete the associated S3 object?
replies(1): >>42177334 #
163. bastloing ◴[] No.42177190[source]
That's pretty cool. Anybody know of something similar for Azure cloud?
replies(1): >>42177671 #
164. jedberg ◴[] No.42177203{4}[source]
That makes a lot of sense. If you eliminate the need for storage expertise the problem becomes a lot easier!

BTW I sent you an email.

165. random3 ◴[] No.42177269{3}[source]
> The funny thing about storage is that all of the problems are the same!

They are all the same, and they are all more than what would at the surface seem to be "just files". The whole OS, especially Linux/UNIX, is "just files", and if you look deeper at databases you can see how it boils down to the file formats (something that was visible with LevelDB but maybe less so with RocksDB, I guess).

166. duidiip ◴[] No.42177303[source]
This sounds unnecessary and expensive. Why use this over similar self-managed open source offerings?
replies(2): >>42177312 #>>42178732 #
167. huntaub ◴[] No.42177304{4}[source]
I agree, I plan to put up a table soon.
168. huntaub ◴[] No.42177312[source]
Hey there, thanks for the concern. There are a spectrum of teams out there. Some teams are totally comfortable building something like this and running their own storage infrastructure. Other teams want a fully managed solution to handle storage for them so that they can focus on building. I think it's great that we have a spectrum of products!
169. huntaub ◴[] No.42177334{4}[source]
Today, we only charge for cache usage (storage) and data transfer between Regatta and S3. If your metadata access doesn't require transfer to S3, then it doesn't cost anything! However, renames do require transfer to S3 (because we have to move the object on the backend).
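For context on why renames aren't free, S3 has no native rename; in boto3 terms a rename is roughly this (illustrative sketch):

    import boto3

    s3 = boto3.client("s3")

    def rename_object(bucket, old_key, new_key):
        # S3 "renames" are a server-side copy followed by a delete,
        # so they're billed as requests/transfer rather than a metadata-only op
        s3.copy_object(Bucket=bucket, Key=new_key,
                       CopySource={"Bucket": bucket, "Key": old_key})
        s3.delete_object(Bucket=bucket, Key=old_key)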
replies(1): >>42181161 #
170. pryelluw ◴[] No.42177457[source]
Careers link points to index page :)
replies(1): >>42177469 #
171. huntaub ◴[] No.42177469[source]
Sorry about that! It's on our list to fix once we're done responding to comments.
172. huntaub ◴[] No.42177486{4}[source]
We are finding a lot of success in the ML Ops space for exactly this reason. I also completely agree that enterprise customers want to keep their data where they can govern and audit it (often in S3). We're excited about the possibility to allow folks to access and use that data while it stays in S3 for primary storage.

I agree around the questions with Mountpoint, and we're solving a very different set of problems than Mountpoint. Mountpoint, for example, isn't designed to be used with all file applications and lacks support for things like appends to existing files, random writes, renames, and symbolic links. On the other hand, Regatta supports POSIX semantics and can work with nearly all file based applications.

173. neeleshs ◴[] No.42177534[source]
Pretty cool. I'm excited about databases using this. Feels like Neon's PostgreSQL storage, but generalized to an FS.

Is this like FUSE with a cache? How does cache invalidation work?

All the best!

replies(1): >>42177618 #
174. gizmo ◴[] No.42177548{3}[source]
Do I understand correctly that the data gets decrypted at your Regatta AWS instances, before the data ends up in the customer's S3 bucket? It sounds like the SSL pipe used for NFS is terminated at Regatta servers. Can customers run the Regatta service on their own hardware?

Or does Regatta only have access to filesystem metadata -- enough to do POSIX stuffs like locks, mv, rm -- but the file contents themselves remain encrypted end-to-end?

replies(1): >>42177649 #
175. huntaub ◴[] No.42177618[source]
Yeah, I like to think of it in a similar vein. We want to empower people to create stateless workflows where they may have previously needed to think about state management. Today, Regatta is an NFS file system where the cache lives on our shared infrastructure. However, when we complete the work on our custom protocol, that will be a FUSE file system which offers additional caching on your instances to enable truly local-like performance.
replies(1): >>42179091 #
176. huntaub ◴[] No.42177636{4}[source]
Let's hope so, I'd love to help teams take storage infrastructure management off of their plate! If you're in the public sector and interested in trying out Regatta, please shoot me an email at hleath [at] regattastorage.com.
177. huntaub ◴[] No.42177649{4}[source]
This is correct, we encrypt data in-transit to the Regatta servers (using TLS), and we encrypt any data that the Regatta servers are storing. Of course, when Regatta communicates with S3, that's also encrypted with TLS (just like using the AWS SDK). However, we don't pass the encrypted data to S3, otherwise you wouldn't be able to read it from the bucket directly and use it in other applications!
178. huntaub ◴[] No.42177662{6}[source]
That's exactly right -- I like to think that we deliver on the promise of those open-source S3 adapters. We provide enterprise-grade performance.
179. erichocean ◴[] No.42177663[source]
In 2024, you are better off dropping the file system abstraction entirely and just embracing object storage abstractions (and ideally, immutable write-once objects).

Source: personal experience, I've done the EFS path and the S3-like path within the same system, and the latter was much easier to develop for and troubleshoot performance. It's also far cheaper to operate.

You can have local caching, rapid "read what I wrote", etc. with very little engineering cost, no one at my company is dedicated to this because the abstraction is ridiculously simple:

1. It's object storage, not a file system. Embrace immutability.

2. When you write to S3, cache locally as well.

3. When you read from S3, check the cache first. Optionally cache locally on reads from S3.

4. Set cache sizes so you don't blow out local storage.

5. Tier your caches when needed to increase sharing. (Immutability makes this trivially safe.)

All that's left is to manage 'checked out files' which is pretty easy when almost all of them are immutable anyway.
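A minimal sketch of steps 1-3 under the immutability assumption (the bucket name and cache directory are placeholders; eviction from step 4 and tiering from step 5 are left out):

    import hashlib
    import os
    import boto3

    s3 = boto3.client("s3")
    BUCKET = "my-bucket"
    CACHE_DIR = "/var/cache/objects"
    os.makedirs(CACHE_DIR, exist_ok=True)

    def _cache_path(key):
        return os.path.join(CACHE_DIR, hashlib.sha256(key.encode()).hexdigest())

    def put(key, data):
        # write-once object: upload to S3 and cache locally (steps 1-2)
        s3.put_object(Bucket=BUCKET, Key=key, Body=data)
        with open(_cache_path(key), "wb") as f:
            f.write(data)

    def get(key):
        # check the cache first, fall back to S3, optionally cache on read (step 3)
        path = _cache_path(key)
        if os.path.exists(path):
            with open(path, "rb") as f:
                return f.read()
        data = s3.get_object(Bucket=BUCKET, Key=key)["Body"].read()
        with open(path, "wb") as f:
            f.write(data)
        return data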

replies(1): >>42177703 #
180. huntaub ◴[] No.42177671[source]
We are looking at launching in Azure Cloud with support for Azure Blob Storage as the backend within the next 6 months. If there's a specific use case that you have, it would be helpful to share it with me at hleath [at] regattastorage.com so we can appropriately prioritize Azure against other cloud vendors and regions.
181. huntaub ◴[] No.42177703[source]
I totally agree that we're continuing to see a trend of applications which are designed to work directly on S3.

However, like the S3 protocol, I think that the file protocol is cemented in time as something that we will be using 100 years from now. For example, most AI applications do still download data sets to local file system devices to actually load and use them; this is why you see a lot of HPC workloads use things like Lustre. Postgres, SQLite, etc. all use file system semantics to operate the database.

I totally respect folks who rewrite their applications to work directly with S3, but as you point out, it comes with a different set of challenges (around caching and chunking).

182. wongarsu ◴[] No.42177892{4}[source]
You pay a datacenter to put it in a rack and connect power and uplinks, then treat it like a big EC2 instance (minus the built-in firewall). Now you just need someone who knows how to secure an EC2 instance and run your preferred software there (with failover and stuff).

If you run a single-digit number of servers and replace them every 5 years, you will probably never see a hardware failure. If you're unlucky and it still happens, get someone to diagnose what's wrong, ship replacement parts to the data center, and pay their tech to install them in your server.

Bare metal at scale is difficult. A small number of bare metal servers is easy. If your needs are average enough you can even just rent them so you don't have capital costs and aren't responsible for fixing hardware issues.

replies(4): >>42179198 #>>42183127 #>>42184135 #>>42185150 #
183. nixosbestos ◴[] No.42178051[source]
> NFSv3 (soon, our custom protocol).

definitely the thing I want to hear more about. Also, I can't help shake the "what's the catch, how is no one else doing this, or are they doing it quietly?" feeling.

replies(1): >>42178076 #
184. ajbt200128 ◴[] No.42178061[source]
Wondering what the difference is between this and juicefs?
replies(1): >>42178079 #
185. huntaub ◴[] No.42178076[source]
Trust me, I feel the same way. The problem with these things is that you end up building a company because you get so much conviction that what you're doing is the right thing for customers, and you end up shocked that this isn't the default for everyone.
186. huntaub ◴[] No.42178079[source]
Great question! Full disclosure, answer copied from another comment:

It's similar to JuiceFS, but JuiceFS writes and reads data from S3 in a proprietary block format. This means that you cannot connect JuiceFS to existing data sets in S3, and you cannot use data written through JuiceFS from the S3 API directly. On the other hand, Regatta reads and writes data to S3 using its native format -- so you can do these things!

187. benatkin ◴[] No.42178556[source]
The headline seems misleading, then.

rclone can work with AWS' different offerings, some of which at least partially address this: https://aws.amazon.com/blogs/aws/new-amazon-s3-express-one-z...

replies(1): >>42178584 #
188. huntaub ◴[] No.42178584{3}[source]
I'm not totally sure what you mean. I don't think that S3 Express One Zone offers any additional atomic semantics in the file system world.
replies(1): >>42179130 #
189. senderista ◴[] No.42178704[source]
If this product is successful, what prevents AWS from cloning it at a lower price (perhaps by leveraging access to their infrastructure) and putting you out of business?
replies(1): >>42178911 #
190. dangoodmanUT ◴[] No.42178732[source]
I bet this guy runs his own servers and databases in his basement too, because fk TCO amirite
191. dangoodmanUT ◴[] No.42178821[source]
Because people are excited... All the positive comments didn't tip you off?
replies(1): >>42179244 #
192. gjmveloso ◴[] No.42178897[source]
Feels like FSx for Lustre without the complexity. Definitely what EFS could be.

Congrats on the launch!

replies(1): >>42179192 #
193. harshaw ◴[] No.42178911[source]
There is room in storage for many kind of products with different sets of tradeoffs. And don't underestimate the ability of a startup to move much faster than AWS.
replies(1): >>42179183 #
194. Vishnu3014 ◴[] No.42178966[source]
Great product ! Congratulations on the launch !!
195. hanslovsky ◴[] No.42179003[source]
that looks interesting. we spent a lot of money on FSxL and might save a lot with Weka. Unfortunately, our data access pattern is very random and will likely not benefit from caching unless we cache the entire dataset (100TB)
replies(1): >>42179177 #
196. bx376 ◴[] No.42179090[source]
Sounds similar to https://juicefs.com/
replies(1): >>42179207 #
197. neeleshs ◴[] No.42179091{3}[source]
I am now inspired to build a toy project to learn how it all works!
198. benatkin ◴[] No.42179130{4}[source]
For the misleading part, I probably should have said "confusing", because I don't think you intended that. I mean that instead of introducing your caching layer, you make it about S3, where the object storage provider seems totally interchangeable. Though it seems to work for a lot of your audience, from what I can tell from other comments here.

As for Express One Zone providing consistency, it would make more groups of operations consistent, provided that the clients could access the endpoints with low latency. It wouldn't be a guarantee, but it would be practical for some applications. It depends on what the problem is -- for instance, do you want someone to never see noticeably stale data? I can definitely see that happening with Express One Zone if it's as described.

replies(1): >>42179162 #
199. huntaub ◴[] No.42179162{5}[source]
Yes, I think this is something that I’m actually struggling with. What’s the most exciting part for users? Is it the fact that we’re building a super fast file system or is it that we have this synchronization to S3? Ultimately, there just isn’t space for it all — but I appreciate the feedback.
replies(1): >>42179229 #
200. huntaub ◴[] No.42179177[source]
One thing that we’re considering for these kinds of use cases is giving users the ability to either (a) request that their data always stays in the hot storage or (b) ask us to pre-load their data sets when they begin analysis (and I expect that our large fleet of instances can do this preloading much quicker than any individual instance could download the data). Would either of those options make the product more helpful for you?
201. huntaub ◴[] No.42179183{3}[source]
For the most part, I agree with this! The storage market is big enough for everyone because of all of the different needs that customers have. That said, if we are able to build something that provides EBS-like performance with pay-as-you-go pricing, there are likely large economic reasons why AWS would not chase making that the default.
202. huntaub ◴[] No.42179192[source]
Thank you! That’s exactly my hope! How can we make these technologies as easy as possible for teams to use so that they can build the best <training> <analysis> or whatever applications without becoming storage experts.
203. huntaub ◴[] No.42179207[source]
It is! Let me share some details in differences that I’ve posted elsewhere:

It's similar to JuiceFS, but JuiceFS writes and reads data from S3 in a proprietary block format. This means that you cannot connect JuiceFS to existing data sets in S3, and you cannot use data written through JuiceFS from the S3 API directly. On the other hand, Regatta reads and writes data to S3 using its native format -- so you can do these things!

204. benatkin ◴[] No.42179229{6}[source]
I think they both go together. It might take about 10 minutes to give a good high-level explanation of it, including how the S3 syncing works -- that the S3 side lags slightly behind the caching layer for reads, and that you can still write to S3. Two-way sync. I imagine that S3 would be treated sort of like another client if updates came from S3 and the clients at the same time. It would probably be not so great to write to S3 directly if you're writing to somewhere that's being actively edited, but if you want to write to a dormant area of S3 directly, that's fine.
205. jmspring ◴[] No.42179324[source]
I wish you luck. Having looked at doing something similar years back, I don't see the market. In the case of what I was involved in, it pivoted to enterprise backup.
replies(1): >>42179894 #
206. murilopl ◴[] No.42179377[source]
Congrats! What a great solution, wish you success. NIT: The forced smooth scrolling on the landing page drives me crazy! haha
207. 0x1ceb00da ◴[] No.42179460{3}[source]
Are you planning to support android? How? AFAIK android doesn't have FUSE or NFS.
replies(1): >>42179573 #
208. huntaub ◴[] No.42179573{4}[source]
I don't think that I'm planning support for Android, did I mistakenly mention it somewhere?
209. sidcool ◴[] No.42179770[source]
Are there any tech details/architecture of the system? Also, congrats on launching.
replies(1): >>42179889 #
210. zX41ZdbW ◴[] No.42179784[source]
That is interesting, but I haven't read how it is implemented yet.

The hard part is a cache layer with immediate consistency. It likely requires RAFT (or, otherwise, works incorrectly). Integration of this cache layer with S3 (offloading cold data to S3) is easy (not interesting).

It should not be compared to s3fs, mountpoint, geesefs, etc., because they lack consistency, are slow, don't support full filesystem semantics, and break often.

It could be compared with AWS EFS, which is also slow (but I didn't try to tune it up to maximum numbers).

For ClickHouse, this system is unneeded because ClickHouse is already distributed (it supports full replication or shared storage + cache), and it does not require full filesystem semantics (it pairs with blob storages nicely).

replies(2): >>42179882 #>>42188267 #
211. pqdbr ◴[] No.42179804[source]
Hi! Would this work for an instance that uses Barman to back up Postgres servers?
replies(1): >>42179887 #
212. huntaub ◴[] No.42179882[source]
Thanks for the note, great to hear from you! I think that what Clickhouse does is great, and I expect that more applications want to take advantage of the low prices of S3 cold storage without needing to build their own application-level abstractions. I'm hopeful that this allows more of these next-generation serverless data products to exist.
213. huntaub ◴[] No.42179887[source]
Yes! I don't know of any reason why that wouldn't work. I've worked with lots of customers who need simple, low-cost storage for database backups.
214. huntaub ◴[] No.42179889[source]
Thank you! We have docs at https://docs.regattastorage.com. There is an architecture page which might answer your questions. If you have deeper questions, feel free to ask in the thread or shoot me an email at hleath [at] regattastorage.com.
215. huntaub ◴[] No.42179894[source]
Thanks for your note! We're really hopeful that our "local-like performance" is part of the story that distinguishes us from other file system solutions. I envision a world where people don't have to overprovision block storage volumes and can just use this instead -- with the ability to easily grab their data from S3.
216. up2isomorphism ◴[] No.42179993[source]
The main reason for adopting object storage is to avoid the burden associated with POSIX file system APIs, and this renders the major motivation for using object storage pointless.

Also, using a translation layer on top of S3 will not save you costs.

replies(1): >>42180303 #
217. huntaub ◴[] No.42180303[source]
Hey there, thanks for your note. I think that the answer here (as with all good questions) is "it depends".

I agree with you, object storage excels at making the storage interface super simple to use (POSIX is incredibly complex). However, that doesn't change the reality that nearly all software still reads and writes data through a local file system interface.

The specifics of whether or not using a translation layer will save you costs comes down a lot to what you're comparing it to. If you have an EBS volume that's 20% full, then I guarantee you that Regatta's storage costs will be cheaper than EBS, even if you don't ever tier to S3. It's just a cherry on top for workloads which may have unpredictable access patterns and don't want all of their data to be hot when not in use.

218. IgorPartola ◴[] No.42180315[source]
Is this meaningfully different from https://github.com/s3ql/s3ql ?

S3 semantics are generally fairly terrible for file storage (no atomic move/rename is just one example) but using it as block storage a la ZFS is quite clever.

replies(1): >>42180411 #
219. tw04 ◴[] No.42180320[source]
At first glance it’s not clear how this is unique from Nasuni.
replies(1): >>42180426 #
220. amitizle ◴[] No.42180323[source]
Why are these solutions always using NFS? I'm asking out of curiosity, not judgement.

I've looked for a solution to write many small files fast (safely). Think about cloning the Linux kernel git repo. Whatever I tested, the NFS protocol was always a bottleneck.

replies(1): >>42180421 #
221. huntaub ◴[] No.42180411[source]
Hey, thanks for the question. From what I can tell (and this could be wrong), it looks like s3ql is using S3 as a block layer. Regatta, on the other hand, allows you to read and write files in their native format. I agree that it's harder to implement than just using S3 for block storage, but I think that it unlocks a lot of potential use cases for customers. With Regatta, we make these semantics performant, which is a huge improvement on the prior art.
222. huntaub ◴[] No.42180421[source]
We choose NFS purely because it's the fastest way to get broad compatibility with most operating systems (NFSv3, for example is supported on both Linux and Windows). However, I have great news for you! We're simultaneously working on a custom protocol (over FUSE today) that is going to solve the small file problem for things like cloning the Linux kernel git repo. You can actually see in our demo video (https://youtu.be/xh1q5p7E4JY?feature=shared&t=170) that we untar the Linux kernel on Regatta in under 12 seconds. We're hopeful that this performance makes file storage useful for a broader set of workloads.
223. shaklee3 ◴[] No.42180422{4}[source]
The public sector is typically air-gapped, so not really.
replies(1): >>42184524 #
224. huntaub ◴[] No.42180426[source]
Thanks for the question. Full disclosure, I'm grabbing this response from another comment:

I have mutual friends with some of the Nasuni folks, and I have a lot of respect for what they do. In particular, Nasuni stores data in a proprietary block format in your S3 bucket, so you can't connect it to existing data sets or use that data directly from S3 out the other side. Whereas with Regatta, we store data in its native format in S3 so you can do these things.

225. daviesliu ◴[] No.42180521[source]
Founder of JuiceFS here, congrats on the launch! I'm super excited to see more people doing creative things in the using-S3-as-a-file-system space. When we started JuiceFS back in 2017, we applied to YC twice but had no luck.

We are still working hard on it, hoping that we can help people with different workloads with different tech!

replies(3): >>42180528 #>>42181262 #>>42182079 #
226. huntaub ◴[] No.42180528[source]
Wow, thanks for coming out! I hope that you're heartened to see the number of people who immediately think of JuiceFS when they see our launch. I totally agree with you, storage is such an interesting space to work in, and I'm excited that there are so many great products out there to fit the different needs of customers.
227. mbrt ◴[] No.42180611[source]
Wow, coincidentally I posted GlassBD (https://news.ycombinator.com/item?id=42164058) a couple of days ago. Making S3 strongly consistent is not trivial, so I'm curious about how you achieved this.

If the caching layer can return success before writing through to s3, it means you built a strongly consistent distributed in memory database.

Or, the consistency guarantee is actually less, or data is partitioned and cannot be quickly shared across clients.

I'm really curious to understand how this was implemented.

replies(1): >>42180672 #
228. huntaub ◴[] No.42180672[source]
Hey, thanks for reaching out. The caching layer does return success before writing to S3 -- that's how we get good performance for all operations, including those which aren't possible to do in S3 efficiently (such as random writes, renames, or file appends). Because the caching layer is durable, we can safely asynchronously apply these changes to the S3 bucket. Most operations appear in the S3 bucket within a minute!
replies(2): >>42180782 #>>42187682 #
229. mbrt ◴[] No.42180782{3}[source]
Very nice, I like the approach. I assume data is partitioned and each file is handled by an elected leader? If data is replicated, you still need a consensus algorithm on updates.

How are concurrent updates to the same file handled? Either only one client can open in write at any one time, or you need fencing tokens.

replies(1): >>42180873 #
230. huntaub ◴[] No.42180873{4}[source]
Without getting too much into internals which could change at any time, yes. You have to replicate, partition, and serve consensus over data to achieve high-durability and availability.

For concurrent updates, the standard practice for remote file systems is to use file locking to coordinate concurrent writes. Otherwise, NFS doesn't have any guarantees about WRITE operation ordering. If you're talking about concurrent writes which occur from NFS and S3 simultaneously, this leads to undefined behavior. We think that this is okay if we do a good job at detecting and alerting the user if this occurs because we don't think that there are applications currently written to do this kind of simultaneous data editing (because Regatta didn't exist yet).

replies(1): >>42180979 #
231. highwaylights ◴[] No.42180894[source]
I have a feeling Amazon is about to throw a big bag of money at you and that this will be the fastest acquisition in HN history. Congratulations on your successful launch!
232. mbrt ◴[] No.42180979{5}[source]
Thanks for the details!

Consistency at the individual file can be guaranteed this way, but I don't think this works across multiple files (as you need a global total order of operations). In any case, this is a pragmatic solution, and I like the tradeoffs. Comparing against NFS rather than Spanner seems the right way to look at it.

replies(1): >>42184600 #
233. hades32 ◴[] No.42181154{3}[source]
Wouldn't that limit the concurrency of the lambdas to 1? Since they would hold a lock on the db file
replies(1): >>42184041 #
234. hades32 ◴[] No.42181161{5}[source]
does that mean you pay for the storage twice (i.e. S3 and Regatta) or is the cache size tunable?
replies(1): >>42184615 #
235. gumbojuice ◴[] No.42181262[source]
As someone who is already happily using JuiceFS, perhaps you can provide a short list of differences (conceptual and/or technical). Thanks for a great product.
236. dsvf ◴[] No.42182079[source]
I'm a happy and satisfied JuiceFS user here, so I too would be interested in the difference between these. Is the Regatta key point caching?
replies(2): >>42182252 #>>42185151 #
237. ChocolateGod ◴[] No.42182252{3}[source]
Also a user of JuiceFS, replaced a GlusterFS cluster a few years ago, far cheaper and easier to scale with no issues or changes needed to the applications using GlusterFS.
238. naushniki ◴[] No.42182345[source]
Do you have any relation to https://regatta.dev/ ?
replies(1): >>42184619 #
239. unit149 ◴[] No.42182488[source]
Was taking a look at pricing features - melted down, paying per month doesn't seem like a bad option; still, the API features 1 hour SLA support for enterprise tier subscribers.

S3 bucket systems for cloud hosting services are typically encrypted through AES-256. SSE-S3 or SSE-KMS are available upon request.

[1]: https://aws.amazon.com/blogs/aws/new-amazon-s3-encryption-se...

Having the API hosted on Regatta's servers but integrating a POSIX-compliant bring-your-own compute would tighten up instance storage fees for the end-user.


replies(1): >>42184511 #
240. imcritic ◴[] No.42182645[source]
Regatta Storage is a new cloud file system^W service
replies(1): >>42184644 #
241. dheera ◴[] No.42182970[source]
I suppose rclone doesn't provide byte range file locking? Running sqlite over rclone would be a disaster.
replies(2): >>42184404 #>>42188761 #
242. swyx ◴[] No.42183127{5}[source]
sounds like an opportunity for someone (you?) to offer an abstraction slightly above bare metal to do the stuff you said to do, charging higher than bare metal but lower than the other stuff. how much daylight is there between those prices?
replies(1): >>42189476 #
243. siscia ◴[] No.42183182[source]
How do you handle write concurrency?

If different processes write to the same file at the same time, what do I read afterwards?

replies(1): >>42184442 #
244. datadeft ◴[] No.42183210[source]
I am not sure what is the use case for this.

I would love to see the following projects instead:

- exposing a transactional API for S3

- transactional filesystem

replies(1): >>42184560 #
245. kramer2718 ◴[] No.42183351[source]
I realize it isn't your target use case, but I'm tempted to move all of my personal stuff stored in Google Drive over to this.
246. gregw2 ◴[] No.42183374{5}[source]
So Regatta has an in-memory cache? Does the POSIX disk write only succeed when the data is in more than one availability zone?
replies(1): >>42184421 #
247. cuno ◴[] No.42183927[source]
Founder of cunoFS here, brilliant to see lots of activity in this space, and congrats on the launch! As you'll know, there's a whole galaxy of design decisions when building file storage, and as a storage geek it's fun to see what different choices people make!

I see you've made some similar decisions to what we did for similar reasons I think - making sure files are stored 1:1 exactly as an object without some proprietary backend scrambling, offering strong consistency and POSIX semantics on the file storage, with eventual consistency between S3 and POSIX interfaces, and targeting high performance. Looks like we differ on the managed service vs traditional download and install model, and the client-first vs server-first approach (though some of our users also run cunoFS on an NFS/SMB gateway server), and caching is a paid feature for us versus an included feature for yours.

Look forward to meeting and seeing you at storage conferences!

replies(2): >>42184394 #>>42187403 #
248. datadeft ◴[] No.42184041{4}[source]
Well, these are the details that many of us are interested in.
replies(1): >>42184548 #
249. tempest_ ◴[] No.42184135{5}[source]
We run on our own stuff at our shop.

Some things that are hidden in the cloud providers' cost are redundant networking, redundant internet connections, and redundant disks.

It's likely still cheaper than the cloud, obviously, but you will need to stomach downtime for that stuff if something breaks.

250. huntaub ◴[] No.42184394[source]
Great to hear from you, I think cunoFS is doing a lot of things right! It’s certainly a fun problem space!
251. huntaub ◴[] No.42184404{3}[source]
That would be my expectation, you need something in the middle to actually broker the file locks.
252. juancampa ◴[] No.42184410{3}[source]
We're also interested in SQLite shared by multiple processes on something like Regatta but my concerns are the issues described in the SQLite documentation about NFS [1]. Notably "SQLite relies on exclusive locks for write operations, and those have been known to operate incorrectly for some network filesystems."

[1] https://sqlite.org/useovernet.html

replies(1): >>42184481 #
253. huntaub ◴[] No.42184421{6}[source]
Hey there! Today, we are replicating cache data within a single availability zone, but we’re working on a multi-availability zone product. If you have a need for multi-AZ, please shoot me an email at hleath [at] regattastorage.com, I’d love to learn more
254. huntaub ◴[] No.42184442[source]
All connected file system clients see read-after-write consistency, so you see the up to date file data!
replies(1): >>42186443 #
255. huntaub ◴[] No.42184481{4}[source]
Ah, yes — there are some specific file locking concerns with NFSv3 (notably that locks aren’t built as leases like in NFSv4). Let me do a double click here, but I know we will be able to support locks correctly with our custom protocol when we launch it by the end of the year.
replies(2): >>42185966 #>>42186463 #
256. huntaub ◴[] No.42184511[source]
All data cached in Regatta is also encrypted with AES-256

Re: bring your own compute: It’s certainly something we’re thinking about. We are in discussions with a lot of customers running GPU clusters with orphaned NVMe resources that they would like to install Regatta on. We’d love to get more details on who’s out there looking for this, so please shoot me an email at hleath [at] regattastorage.com

257. huntaub ◴[] No.42184524{5}[source]
I think it depends which part of the public sector! AWS GovCloud is not airgapped, but I certainly know of deployments which are.
258. huntaub ◴[] No.42184548{5}[source]
This is super dependent on the application, and not something that I could answer without being an expert in SQLite. If SQLite only allows a single reader or writer, then yes. This could still be a good choice for applications which elect a “leader” to serve the database, though.
259. huntaub ◴[] No.42184560[source]
I think we’re moving in that direction. I’m really interested to do more in the API space than traditional storage has allowed. Tell me a bit more what you mean by “transactional file system”?
replies(1): >>42196700 #
260. huntaub ◴[] No.42184600{6}[source]
This is actually also interesting, in that I don’t think that the file system paradigm actually requires a global total ordering of operations (and, in fact, many file systems don’t provide this). I know that sounds like snapshots wouldn’t be valid, but I think that applications which really care about data consistency (such as databases) are built specifically to handle this (with things like write-ahead-logs).
261. huntaub ◴[] No.42184615{6}[source]
That’s correct — you pay for the storage yourself in S3, and then you pay for the storage when it’s in the Regatta cache. We may expose the ability to limit the cache size in the future for teams who need controllable costs more than the highest performance.
262. huntaub ◴[] No.42184619[source]
We don’t actually, but thanks for pointing that out!
263. huntaub ◴[] No.42184644[source]
Well, I think this is the benefit that our customers are looking for. They aren’t interested in becoming storage administrators, and running Regatta as a service allows them to not. There are, of course, other teams who do want to do that. It’s great that both kinds of products can exist.
264. kingnothing ◴[] No.42185150{5}[source]
Are you going to risk your entire business over "probably never get a hardware failure" that, if it hits, might result in days of downtime to resolve? I wouldn't.
replies(1): >>42189295 #
265. huntaub ◴[] No.42185151{3}[source]
I know that I've answered this question a couple times in the thread, so I don't know if my words add extra value here. But, I agree that it would be interesting to hear what Davies is thinking.
replies(1): >>42185864 #
266. Melonotromo ◴[] No.42185334[source]
Your price point is very bad. The overprovisioning statement in your post indicated that you would be a 'cheap' alternative, but 100 GB for $5?

I'm also not sure that it's a good architecture to have your servers in between my infrastructure and my S3. If I'm on one cloud provider, the traffic between their S3-compatible solution and my infrastructure is most of the time within the same cloud provider. And if not, I will for sure have a local cache rcloning the stuff from left to right.

I also don't get your calculator at all.

replies(1): >>42185428 #
267. huntaub ◴[] No.42185428[source]
Thanks for the feedback. If price is the single blocker for teams to try the product, I'd love to discuss more. Please send me an email at hleath [at] regattastorage.com.

> If i'm on one cloud provider, the traffic between their S3 compatible solution and my infrastructure is most of the time in the same cloud provider

This is exactly right, and it's why we're working to deploy our infrastructure to every major cloud. We don't want customers paying egress costs or cross-cloud latency to use Regatta.

> I also don't get your calculator at all.

This could probably use a bit more explanation on the website. We're comparing to the usage of local devices. We find that, most often, teams will only use 15% of the EBS volumes that they've purchased (over a monthly time period). This means that instead of paying $0.125/GiB-mo of storage (like io2 offers), they're actually paying $0.833/GiB-mo of actual bytes stored ($0.125/15%). Whereas on Regatta, they're only paying for what they use -- which is a combination of our caching layer ($0.20) and S3 ($0.025). That averages out closer to $0.10/GiB stored, depending on the amount of data that you use.
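To make that last number concrete with an assumed hot fraction (the 40% below is just an example, not a measured figure): if roughly 40% of your stored bytes are in the Regatta cache in a given month, the blended cost is about 0.40 x $0.20 + $0.025 ≈ $0.105/GiB-mo, versus the ~$0.83/GiB-mo effective rate on a 15%-utilized io2 volume.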

replies(1): >>42185478 #
268. Melonotromo ◴[] No.42185478{3}[source]
What is then your initial latency if I start an AI job 'fresh'? You still need to hit the backend, right? How long do you then keep this data in your cache?

Btw, while your experience works well for Netflix, in my company (also very big) we have LoBs, and while different teams utilize their storage in different ways, none of us are aligned at a level where we would benefit directly from your solution.

From a pure curiosity point of view: do you already have enough customers who see savings? What are their use cases? The size of their setups?

replies(1): >>42186239 #
269. dsvf ◴[] No.42185864{4}[source]
Yes, your input into the thread cleared many things up for me, thanks!
270. juancampa ◴[] No.42185966{5}[source]
One more question. How does it handle large files that are frequently modified in arbitrary locations (like a SQLite file)? Will it only upload the "diffs" to S3? I'm guessing it doesn't have to scan the whole file to determine what's changed since it can keep track of what's "dirty".

I ask because last time I checked, S3 wouldn't let you "patch" an object. So you'd have to push the diff as separate objects and then "reconstruct" the original file client-side as different chunks are read, right?

replies(1): >>42186260 #
271. huntaub ◴[] No.42186239{4}[source]
> What is then your initial latency if i start an AI job 'fresh'? You still need to hit the backend right? How long do you then keep this data in your cache?

That's correct, and it's something that we can tune if there's a specific need. For AI use cases specifically, we're working on adding functionality to "pre-load" the cache with your data. For example, you would be able to call an API that says "I'm about to start a job and I need this directory on the cache". We would then be able to fan out our infrastructure to download that data very quickly (think hundreds of GiB/s) -- much faster than any individual instance could download the data. Then your job would be able to access the data set at low-latency. Does that sound like it would make sense for you?

> Btw., while your experience works well for Netflix, in my company (also very big) we have LoBs, and while different teams utilize their storage in different ways, none of us are aligned at a level where we would benefit directly from your solution.

I'm not totally sure what you mean here. I don't anticipate that a large organization would have to 100% buy-in to Regatta in order to get benefits. In fact, this is the reason why we are so intent on having a serverless product that "scales to 0". That would allow each of your teams to independently try Regatta without needing to spend hundreds of thousands of dollars on something Day 1 for the entire company.

> From a pure curiosity point of view: do you already have customers who are seeing savings? What are their use cases? What's the size of their setups?

These are pretty intimate details about the business, and I don't think I can share very specific data. However, yes -- we do have customers who are realizing massive savings (50%+) over their existing setups.

272. huntaub ◴[] No.42186260{6}[source]
That's correct re: the S3 API. What we do is we "merge" multiple write requests together to minimize the cost to you and the number of requests to S3. For example, if you write a file 1,000 times in the span of a minute, we would merge that into a single PutObject request to S3. Of course, we force flush the data every few minutes (even if it's being written frequently) in order to make sure that there's an up-to-date copy in S3.
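
A minimal sketch of that coalescing idea, assuming boto3 and one in-memory buffer per file (Regatta's actual cache is durable and its implementation surely differs):

    # Sketch of write coalescing: absorb many small writes, emit one PutObject.
    import time
    import boto3

    s3 = boto3.client("s3")
    FLUSH_INTERVAL = 300  # seconds; force-flush every few minutes regardless

    class CoalescingWriter:
        def __init__(self, bucket, key):
            self.bucket, self.key = bucket, key
            self.buffer = bytearray()
            self.last_flush = time.monotonic()

        def write(self, offset, data):
            # Small writes only touch the buffer, not S3.
            end = offset + len(data)
            if end > len(self.buffer):
                self.buffer.extend(b"\x00" * (end - len(self.buffer)))
            self.buffer[offset:end] = data
            if time.monotonic() - self.last_flush > FLUSH_INTERVAL:
                self.flush()

        def flush(self):
            # S3 objects can't be patched in place, so each flush rewrites the
            # whole object with a single PutObject.
            s3.put_object(Bucket=self.bucket, Key=self.key, Body=bytes(self.buffer))
            self.last_flush = time.monotonic()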
273. mdaniel ◴[] No.42186443{3}[source]
I hear you about the "limited hands, infinite wishlist", but nowadays when I see someone making bold claims about transactions and consistency over the network, I grab my popcorn bucket and eagerly await the Jepsen report about it.

The good news is that you, personally, don't have to spend the time to create the Jepsen test harness; you can pay them to run the test, but I have no idea what kind of O($) we're talking here. Still, it could be worth it to inspire confidence, and it's almost an imperative if you're going to (ahem) roll your own protocol for network file access :-/

replies(2): >>42186798 #>>42188537 #
274. mdaniel ◴[] No.42186463{5}[source]
I would really enjoy hearing why SMB3 or the hundreds of other protocols are somehow insufficient for your needs. The thought of "how hard can a custom protocol be?!" makes me shudder, to say nothing of the burden -- ours and yours -- of maintaining endpoint implementations for all the bazillions of places one would want to consume a network mount.
replies(1): >>42187584 #
275. boulos ◴[] No.42186660[source]
I love this space, and I have tried and failed to get cloud providers to work on it directly :). We could not get the Avere folks to admit that their block-based thing on object store was a mistake, but they were also the only real game in town.

That said, I feel like writeback caching is a bit ... risky? That is, you aren't treating the object store as the source of truth. If your caching layer goes down after a write is ack'ed but before it's "replicated" to S3, people lose their data, right?

I think you'll end up wanting to offer customers the ability to do strongly-consistent writes (and cache invalidation). You'll also likely end up wanting to add operator control for "oh and don't cache these, just pass through to the backing store" (e.g., some final output that isn't intended to get reused anytime soon).

Finally, don't sleep on NFSv4.1! It ticks a bunch of compliance boxes for various industries, and then they will pay you :). Supporting FUSE is great for folks who can do it, but you'd want them to start by just pointing their NFS client at you, then "upgrading" to FUSE for better performance.

replies(1): >>42186783 #
276. huntaub ◴[] No.42186783[source]
> That is, you aren't treating the object store as the source of truth. If your caching layer goes down after a write is ack'ed but before it's "replicated" to S3, people lose their data, right?

This is exactly why we're building our caching layer to be highly-durable, like S3 itself. We will make sure that the data in the cache is safe, even if servers go down. This is what gives us the confidence to respond to the client before the data is in S3. The big difference between the data living in our cache and the data living in S3 is cost and performance, not necessarily durability.

> I think you'll end up wanting to offer customers the ability to do strongly-consistent writes (and cache invalidation). You'll also likely end up wanting to add operator control for "oh and don't cache these, just pass through to the backing store" (e.g., some final output that isn't intended to get reused anytime soon).

I think this is exactly right. I think that storage systems are too often hands-off about the data ("oh, give us the bytes and we will store them for you"). I believe that there are gains to be had by asking users to tell you more about what they're doing. If you have a directory which is only used to read files and a directory which is only used to write files, then you probably want different cache strategies for those directories. I believe we can deliver this with good enough UX for most people to use.

> Finally, don't sleep on NFSv4.1! It ticks a bunch of compliance boxes for various industries, and then they will pay you :). Supporting FUSE is great for folks who can do it, but you'd want them to start by just pointing their NFS client at you, then "upgrading" to FUSE for better performance.

I certainly don't, and this is why we are supporting NFSv3 right now. That's not going away any time soon. We want to offer something that's highly compatible with the industry at large today (NFS-based, we can talk specifics about whether or not that should be v3 or v4) and then something that is high-performance for the early adopters who can use something like FUSE. I think that both things are required to get the breadth of customers that we're looking for.

277. huntaub ◴[] No.42186798{4}[source]
We've actually been thinking about getting Jepsen to do this, so I'm happy to hear that you also think that it would inspire confidence!
replies(1): >>42187606 #
278. Andys ◴[] No.42187403[source]
Is that Gweo? Didn't know you were in the storage space, good to see you!
279. huntaub ◴[] No.42187584{6}[source]
Ultimately, we're just working on a different problem space than these protocols. That's not to say that the existing protocols are bad; I absolutely believe that they're great. Our ultimate goal, though, is to replace block storage with a file-layer protocol, and this requires somewhat different semantics than what the existing file protocols support.

I don't at all disagree that it's a hard problem! That's part of what makes it so fun to work on.

280. ignoramous ◴[] No.42187606{5}[source]
That's exactly right!
281. ignoramous ◴[] No.42187682{3}[source]
Regatta is a write-through cache for an S3 bucket under its supervision? I guess external changes to that bucket are then a no-no?

Any plans to expand to other stores, like R2 (I ask since unlike S3, R2 egress is free)?

replies(1): >>42187873 #
282. rkunnamp ◴[] No.42187696{5}[source]
Plus 1 for GCP. Can't wait to try it out.
283. ignoramous ◴[] No.42187812[source]
> given what I know...

Given Hunter worked at AWS, I bet they are way too familiar with IAD.

284. huntaub ◴[] No.42187873{4}[source]
Hey there, that's sort of the correct way to think about it -- notably that our caching layer is high-durability, so we can keep recent writes in the cache safely. External changes to the bucket are okay! Lots of customers need to (for example) ingest data into S3, then process it on a file system, and that totally works. The only thing that isn't supported is editing the same file from both S3 and the file system simultaneously. We think this is a super rare case, and probably doesn't exist today (because there isn't anything that bridges S3 and file semantics yet).

We support all S3-compatible storage services today, including R2, GCS, and MinIO.
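
For anyone unfamiliar, "S3-compatible" generally means the same client code works against any of these stores by swapping the endpoint. A generic (non-Regatta) example with boto3, using placeholder endpoint and credentials:

    # Generic S3-compatible access: the endpoint URL selects R2, MinIO, GCS, etc.
    # The endpoint and credentials below are placeholders.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://<account-id>.r2.cloudflarestorage.com",  # or a MinIO/GCS endpoint
        aws_access_key_id="<access-key>",
        aws_secret_access_key="<secret-key>",
    )
    print(s3.list_objects_v2(Bucket="my-bucket").get("KeyCount", 0))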

replies(1): >>42191314 #
285. itissid ◴[] No.42187886[source]
Noob question: when an average person buys 2TB of storage from a cloud provider, they pay upfront for the entire thing. Could pricing for such a product be made more competitive (vs. Dropbox) using a solution like this?

It sometimes takes years to fill it up with photos, videos, and other documents. Sounds like one could build a killer, pay-as-you-fill-it-up service with low amortized cost to compete with Dropbox.

replies(1): >>42187965 #
286. huntaub ◴[] No.42187965[source]
This is true, and I think that there are consumer (or at least "run on your laptop") versions of this that could make sense. However, the technology underneath would have to look very different. For example, these protocols are designed for online file systems (e.g. you must be connected to the file system directly in order to list what's in a directory). This works great in a data center, but doesn't work great on your laptop.

On the other hand, something like Dropbox is actually a program running on your laptop that simulates a file system, and then does the synchronization at the file level as needed. I think that there's probably some latent demand for a similar product for developers to access their S3 buckets easily from their laptops, and it's something we might look into as we get farther along.

287. dangoodmanUT ◴[] No.42188267[source]
There are more ways to achieve consensus than Raft.
288. craigkilgo ◴[] No.42188429[source]
How did you choose the name Regatta?
replies(1): >>42190146 #
289. SteveNuts ◴[] No.42188465[source]
Any plans to support on-prem object stores and not just S3?
replies(1): >>42190156 #
290. aloukissas ◴[] No.42188489[source]
This is fantastic! Interestingly, I was one of the early engineers at Maginatics [1], a company that built exactly this in 2011, and Netflix was one of our earliest beta customers. We strove to be both SMB3- and POSIX-compatible, while leaning into SMB3 semantics. We had some pretty great optimizations that gave almost local-disk performance (e.g. using file and directory leases [2], async metadata ops, data and metadata caching, etc.). EFS was just coming out at that point (Azure, I think, also had something similar in the works).

I'll be looking closely at what you're building!

[1] https://www.dell.com/en-us/blog/welcoming-spanning-maginatic...

[2] https://www.slideshare.net/slideshow/maginatics-sdcdwl/39257...

replies(1): >>42189699 #
291. geertj ◴[] No.42188537{4}[source]
> I grab my popcorn bucket and eagerly await the Jepsen report about it

I am the same, as distributed consensus is notoriously hard, especially when it fronts distributed storage.

However, it is not impossible. Hunter and I were both on the EFS team at AWS (I am still there), and he was deeply involved in all aspects of our consensus and replication layers. So if anyone can do it, Hunter can!

replies(1): >>42190148 #
292. garganzol ◴[] No.42188761{3}[source]
Running SQLite over rclone is not a disaster as long as you run only a single instance working with that database; rclone provides no support for locking semantics.
293. mikeshi42 ◴[] No.42189051[source]
Wow, GMail Drive is a walk down memory lane :) It really is amazing how far we've come since then!
294. weinzierl ◴[] No.42189101{5}[source]
First of all, databases don't support running on NFS; it is an unsupported configuration.

The deeper reason is that the consistency guarantees of NFS (close-to-open consistency) are a lot weaker than what you get from POSIX.

replies(1): >>42192720 #
295. nine_k ◴[] No.42189295{6}[source]
Just pay 2x for the hardware and have a hot standby, 1990s-style. Practice switching between the boxes every month or so; it should be imperceptible to customers and nearly a non-event for ops.
replies(1): >>42196263 #
296. nthh ◴[] No.42189476{6}[source]
I'm sure there are companies in this space providing private clouds on bare metal; I wonder how that would be to operate at scale, though.
297. nthh ◴[] No.42189500{3}[source]
This is compelling but it would be useful to compare upfront costs here. Investing $20,000+ in a server isn't feasible for many. I'd also be curious to know how much a failsafe (perhaps "heatable" cold storage, at least for the example) would cost.
298. huntaub ◴[] No.42189699[source]
Awesome! Great to meet you, so happy that so many folks in the space are here.
299. objectivefs ◴[] No.42189747[source]
Congratulations on your launch from ObjectiveFS! There is a lot of interest in 1-to-1 filesystems for mixed workloads, hope you can capture a nice share of that.

Using NFS and being able to use an existing bucket is a nice way to make it easy to get started and try things out. For applications that need full consistency between the S3 and the filesystem view, you can even provide an S3 proxy endpoint on your durable cache that removes any synchronization delays.

replies(1): >>42190111 #
300. huntaub ◴[] No.42190111[source]
Thank you so much! It’s been amazing to see what you all have built over the years, and it’s (of course) been inspirational for me.
301. huntaub ◴[] No.42190146[source]
I spent several years working in the Northeast, and I developed an appreciation (but not a skill) for sailing. In some sense, I think of Regatta as high-speed sailing on top of customer data lakes.
302. huntaub ◴[] No.42190148{5}[source]
Thank you for the kind words, Geert!
303. huntaub ◴[] No.42190156[source]
It’s not as clear, but it’s certainly something we are considering. If you’d like to use us on-prem, I’d love to hear more. Can you shoot me an email with details at hleath [at] regattastorage.com?
304. nisten ◴[] No.42191010[source]
OK, that's cool, but like... you could've just given me a bash script to do the same thing instead of the pitch-deck-followup baggage of the n-th try at recreating the Dropbox lottery shot from a decade and a half ago...
replies(1): >>42192681 #
305. TheTaytay ◴[] No.42191049{3}[source]
First, Regatta sounds extremely helpful, and I’ve enjoyed reading your responses.

Responding here to say that I'd love to hear more about your comparison to FlexFS. In fact, I'd love to see a few of them: FlexFS, MountPoint, etc.

Lastly, I couldn’t get the privacy policy to load on your site (I’m on mobile if that helps)

replies(1): >>42192749 #
306. ignoramous ◴[] No.42191314{5}[source]
I actually asked about R2 to see if Regatta's pricing is any different there, since R2 has no egress fee. I should have been clearer.

btw, thanks a bunch for answering my Q & everyone else's too (except for parts where you couldn't talk about the implementation, understandably so). Appreciate it. Wishing the best.

307. huntaub ◴[] No.42192681[source]
That’s true, we could just release the software open source, but that doesn’t help our customers who don’t want to run and manage their own infrastructure. Our customers tell us that the value of the product comes from it being fully managed — they simply need to click a button, and all of this works out of the box.
308. huntaub ◴[] No.42192720{6}[source]
I don’t know if I agree, for example, Postgres has this [1] to say about using NFS as the backing store. I think that part of the challenge is that there are so many implementation details that differ between NFS servers and many configuration options that teams can fiddle with (Postgres specifically calls out “async” as dangerous). Close to open semantics are actually stronger than what something like XFS offers (because XFS isn’t required to flush data to disk on file close), and databases should be fsyncing their write ahead logs from the application layer. Like said though, this doesn’t mean that there aren’t certain configurations of NFS which won’t work (async for example means that NFS servers won’t actually write to non-volatile storage on fsync, which is of course dangerous for any application).

[1] https://www.postgresql.org/docs/current/creating-cluster.htm...
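
To make "fsyncing the write-ahead log from the application layer" concrete, here is a minimal sketch; whether the fsync actually reaches stable storage depends on the NFS server's export options ("sync" vs "async"), which is the risk described above.

    # Minimal application-level WAL append with an explicit fsync.
    import os

    def wal_append(path: str, record: bytes):
        fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o600)
        try:
            os.write(fd, record)
            os.fsync(fd)  # ask the kernel (and the NFS server) to hit stable storage
        finally:
            os.close(fd)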

309. huntaub ◴[] No.42192749{4}[source]
Thank you for the note. I’d recommend checking out this section of our docs [1], where we are trying to compile some of this comparison. I haven’t called out FlexFS specifically, but I’ll work on adding that soon. We’ll also get the Privacy Policy fixed today, thanks for pointing that out.

[1] https://docs.regattastorage.com/details/architecture

310. kingnothing ◴[] No.42196263{7}[source]
How many hours of labor does that take every month you fail over? What about hot hard-drive spares? Do you want networking redundancy? How about data backups? A second set of hot servers in another physical data center?

All of that costs money and time. You're probably better off using cloud hosting and focusing on your unique offering than having that expertise and coordination in house.

311. datadeft ◴[] No.42196700{3}[source]
Currently, when we use filesystems, we rely on kernel functionality for persistence. Relying on syncing[1] can introduce interesting bugs. I can imagine a scenario where the FS has an API that is actually transactional, and we can use that to transactionally mutate the contents of files instead of relying on fsync.

    1. fsync, fdatasync - synchronize a file's in-core state with storage device
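
Until such a transactional API exists, the usual workaround is the write-temp, fsync, rename pattern, sketched below under the assumption of a local POSIX filesystem:

    # Closest thing POSIX offers to transactional file mutation today.
    import os

    def atomic_replace(path: str, data: bytes):
        tmp = path + ".tmp"
        fd = os.open(tmp, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
        try:
            os.write(fd, data)
            os.fsync(fd)          # new contents reach stable storage
        finally:
            os.close(fd)
        os.rename(tmp, path)      # atomic swap: readers see old or new, never partial
        dfd = os.open(os.path.dirname(path) or ".", os.O_DIRECTORY)
        try:
            os.fsync(dfd)         # make the rename itself durable
        finally:
            os.close(dfd)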