bk.

Reputation: 6298

Git + a large data set?

We're often working on a project where we've been handed a large data set (say, a handful of files that are 1GB each), and are writing code to analyze it.

All of the analysis code is in Git, so everybody can check changes in and out of our central repository. But what to do with the data sets that the code is working with?

I want the data in the repository:

However, I don't want the data in the git repository:

It seems that I need a setup with a main repository for code and an auxiliary repository for data. Any suggestions or tricks for gracefully implementing this, either within git or in POSIX at large? Everything I've thought of is in one way or another a kludge.

Upvotes: 22

Views: 5991

Answers (6)

serv-inc

Reputation: 38297

While the other answers provide useful hints, git-worktree is another tool that addresses one aspect of this:

git cloning a spare copy (so I have two versions in my home directory) will pull a few GB of data I already have. I'd rather either have it in a fixed location [set a rule that data must be in ~/data] or add links as needed.

With git-worktree you can have several working directories in your home directory, e.g. each checked out to a different branch, while the repository's object store exists only once on disk. This is also useful for developing several feature/bugfix branches in parallel (if switching branches breaks your IDE, for example).
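
A minimal sketch (the branch and path names are just placeholders): adding a second working tree for another branch without a second copy of the object database:

    # from inside the existing clone
    git worktree add ../analysis-v2 analysis-v2   # check out branch analysis-v2 in a sibling directory
    git worktree list                             # show all working trees attached to this repository
    git worktree remove ../analysis-v2            # detach the extra working tree when done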

Upvotes: 0

eric

Reputation: 8108

I recommend Git Large File Storage, which integrates seamlessly into the git ecosystem. It replaces large files in the repository with small text pointers, while the actual file contents live in a separate LFS store.

After installing (https://packagecloud.io/github/git-lfs/install), you can set it up in your local repo with git lfs install. Using it is then easy: tell it what types of files you want to track (git lfs track "*.gz"), make sure .gitattributes itself is committed, and it should just work.
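
A minimal end-to-end sketch (the *.gz pattern comes from above; the file name is just an example):

    git lfs install                   # set up the LFS hooks in this repository
    git lfs track "*.gz"              # record the pattern in .gitattributes
    git add .gitattributes            # version the tracking rules themselves
    git add samples.gz                # the large file is committed as a small pointer
    git commit -m "Add sample data via LFS"
    git push                          # pointers go to git, content to the LFS store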

Upvotes: 2

j-i-l

Reputation: 10987

As an alternative, the data could reside in a folder that is untracked by git and synced by a p2p service. We use this solution for a dataset of several tens of GB, and it works quite nicely.

  • The dataset is shared directly between the peers.
  • Depending on the p2p software, older versions can be kept and restored.
  • The dataset is automatically kept up to date whenever it changes.

syncthing is the software we use.
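
The git side of this setup is just a matter of keeping the synced folder out of version control (a sketch; the data/ folder name is an example, and Syncthing itself is configured through its own GUI/config, which is not shown here):

    echo "data/" >> .gitignore                        # never track the synced data directory
    git add .gitignore
    git commit -m "Ignore the p2p-synced data folder"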

Upvotes: 1

Adam Dymitruk

Reputation: 129762

Use submodules to isolate your giant files from your source code. More on that here:

http://git-scm.com/book/en/v2/Git-Tools-Submodules

The examples there talk about libraries, but the same approach works for large, bloated things like data samples for testing, images, movies, etc.
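
A minimal sketch, assuming the data set lives in its own repository (the URL and the data path are placeholders):

    # in the code repository: record the data repo as a submodule
    git submodule add https://example.com/team/analysis-data.git data
    git commit -m "Track the data set as a submodule"

    # collaborators who actually need the data fetch it explicitly
    git submodule update --init data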

You should be able to fly while developing, only pausing here and there if you need to look at new versions of giant data.

Sometimes it's not even worthwhile tracking changes to such things.

To address your issue of getting more clones of the data: if your git implementation and OS support hard links, a local clone should be a breeze, since git hard-links the objects instead of copying them.
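
For instance (the paths are placeholders), a plain local clone on the same filesystem hard-links the object files rather than duplicating them:

    git clone /home/me/project /home/me/project-copy      # objects are hard-linked, not copied
    git clone --shared /home/me/project /home/me/scratch  # or borrow the object store via alternates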

The nature of your giant dataset is also at play. If you change some of it, are you changing giant blobs or a few rows in a set of millions? This should determine how effective a VCS will be as a change-notification mechanism for it.

Hope this helps.

Upvotes: 16

sehe

Reputation: 393839

bup claims to do a good job of incrementally backing up large files.

I think bup assumes a separate repository to do its work, so you'd end up using submodules anyway. However, if you want good bandwidth reduction, this is the thing.
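
A rough sketch of the bup workflow, assuming a dedicated backup repository under ~/data-bup (all paths and names are placeholders):

    export BUP_DIR=~/data-bup        # where bup keeps its git-format repository
    bup init                         # create that repository
    bup index ~/data                 # scan the data directory for changes
    bup save -n dataset ~/data       # store a deduplicated snapshot under the name "dataset"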

Upvotes: 1

adl

Reputation: 16054

This sounds like the perfect occasion to try git-annex:

git-annex allows managing files with git, without checking the file contents into git. While that may seem paradoxical, it is useful when dealing with files larger than git can currently easily handle, whether due to limitations in memory, checksumming time, or disk space.
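
A minimal sketch of that workflow (the file name is a placeholder):

    git annex init "analysis workstation"    # turn this clone into an annex
    git annex add samples.gz                 # content goes to the annex; a symlink is committed
    git commit -m "Add sample data to the annex"

    # in another clone, fetch the content only when it is actually needed
    git annex get samples.gz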

Upvotes: 10
