213 points by shcheklein | 15 comments
1. dmpetrov ◴[] No.41890616[source]
hi there! Maintainer and author here. Excited to see DVC on the front page!

Happy to answer any questions about DVC and our sister project DataChain https://github.com/iterative/datachain which does data versioning with slightly different assumptions: no file copying, and built-in data transformations.

replies(3): >>41890932 #>>41896923 #>>41897005 #
2. ajoseps ◴[] No.41890932[source]
If the data files are all just text files, what are the differences between DVC and using plain Git?
replies(3): >>41891059 #>>41891080 #>>41893500 #
3. dmpetrov ◴[] No.41891059[source]
In that case, you need DVC if:

1. Files are too large for Git and Git LFS.

2. You prefer using S3/GCS/Azure as storage.

3. You need to track transformations/pipelines on the files: clean up a text file, train a model, etc.

Otherwise, vanilla Git may be sufficient.
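
To make (1) and (2) concrete, a minimal sketch of the workflow (the bucket and file names here are made up):

    git init && dvc init
    dvc remote add -d storage s3://my-bucket/dvc-store   # S3/GCS/Azure as storage
    dvc add data/images.tar                              # file too large for Git/Git LFS
    git add data/images.tar.dvc data/.gitignore          # Git tracks only a small pointer
    git commit -m "Track dataset with DVC"
    dvc push                                             # upload the data to the remote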

4. miki123211 ◴[] No.41891080[source]
DVC does a lot more than Git.

It essentially makes sure that your results can be reproducibly generated from your original data. If any script or data file is changed, the parts of your pipeline that depend on it, possibly recursively, get re-run, and the relevant results get updated automatically.

There's no chance of, say, slightly changing the structure of your original dataset, accidentally forgetting to regenerate one of the intermediate models, not noticing that the script to regenerate it no longer works due to the new dataset structure, and then being reminded a year later when you move to a new computer and try to regenerate everything from scratch.

It's a lot like Unix make, but with the ability to keep track of different Git branches and the data/intermediates they need, which saves you from regenerating everything every time you make a new checkout, lets you easily exchange large datasets with teammates, etc.

In theory, you could store everything in Git, but then every time you made a small change to your scripts that, e.g., changed the way some model works and slightly adjusted a score for each of ten million rows, your diff would be 10M lines, and every version of that dataset would be stored in your repo forever, making it unbelievably large.
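
For a feel of how this looks in practice, a pipeline like that is declared in a dvc.yaml file; the stage names, scripts, and data files below are hypothetical:

    stages:
      prepare:
        cmd: python prepare.py data/raw.csv data/clean.csv
        deps:
          - prepare.py
          - data/raw.csv
        outs:
          - data/clean.csv
      train:
        cmd: python train.py data/clean.csv model.pkl
        deps:
          - train.py
          - data/clean.csv
        outs:
          - model.pkl

Running dvc repro then behaves like make: edit prepare.py or data/raw.csv and the prepare stage re-runs, followed by train if data/clean.csv actually changed; edit only train.py and just the train stage re-runs.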

replies(3): >>41891756 #>>41894861 #>>41895262 #
5. azinman2 ◴[] No.41891756{3}[source]
So where do the adjusted 10M rows live instead? S3?
replies(1): >>41892535 #
6. thangngoc89 ◴[] No.41892535{4}[source]
DVC supports multiple remotes. S3 is one of them; there are also WebDAV, local FS, Google Drive, and a bunch of others. You can see the full list here [0]. Disclaimer: not affiliated with DVC in any way, just a user.

[0] https://dvc.org/doc/user-guide/data-management/remote-storag...
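
To answer the question above directly: the adjusted rows live in whichever remote you configure, content-addressed by hash, while Git stores only a small pointer file per tracked artifact. A .dvc pointer looks roughly like this (hash and names invented):

    outs:
    - md5: 1a2b3c4d5e6f7a8b9c0d1e2f3a4b5c6d
      size: 104857600
      path: rows.parquet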

7. agile-gift0262 ◴[] No.41893500[source]
It's not just for managing file versioning. You can define a pipeline with different stages, along with each stage's dependencies and outputs, and DVC will figure out which stages need to run depending on which dependencies have changed. Stages can also output metrics and plots, and DVC has utilities to expose, explore, and compare those.
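
For example, assuming a stage in dvc.yaml declares a metrics file (say, metrics.json), the day-to-day loop looks roughly like:

    dvc repro                 # re-run only the stages whose dependencies changed
    dvc metrics show          # print the current metric values
    dvc metrics diff HEAD~1   # compare metrics against the previous commit
    dvc plots diff HEAD~1     # same idea for plots
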
8. woodglyst ◴[] No.41894861{3}[source]
This sounds a lot like the experimental project Jacquard [0] from Ink & Switch.

[0] https://www.inkandswitch.com/jacquard/notebook/

9. amelius ◴[] No.41895262{3}[source]
Sounds like it is more a framework than a tool.

Not everybody wants a framework.

replies(2): >>41895874 #>>41896912 #
10. JadeNB ◴[] No.41895874{4}[source]
> Sounds like it is more a framework than a tool.

> Not everybody wants a framework.

The second part of this comment seems strange to me. Surely nothing on Hacker News is shared with the expectation that it will be interesting, or useful, to everyone. Equally, surely there are some people on HN who will be interested in a framework, even if it might be too heavy for other people.

replies(1): >>41896274 #
11. amelius ◴[] No.41896274{5}[source]
Just saying that what makes Git so appealing is that it does one thing well, and from that point of view DVC seems to be in an entirely different category.
12. stochastastic ◴[] No.41896912{4}[source]
It doesn’t force you to use any of the extra functionality. My team has been using it just for the version-control part for a couple of years, and it has worked great.
13. johanneskanybal ◴[] No.41896923[source]
I mostly consult as a data engineer, not in ML ops, but I’m interested in some aspects of this. We have 10 years of Parquet files from 300+ different Kafka topics, and we’re currently migrating to Apache Iceberg. We’ll backfill on a need-only basis, and it would be nice to track that with Git. Would this be a good fit for that?

Another potential aspect would be tracking schema evolution in a nicer way than we currently do.

Thanks in advance. Huge fan of anything-as-code, and I think it’s a great fit for data (20+ years in this area).

14. stochastastic ◴[] No.41897005[source]
Thanks for making and sharing DVC! It’s been a big help.

Is there any support that would be helpful? I’ll look at the project page too.

replies(1): >>41897163 #
15. dmpetrov ◴[] No.41897163[source]
Thank you!

Just shoot an email to support and mention HN. I’ll read and reply.