Reputation: 131
I am looking for a good way to transfer non-trivial amounts of data (10 GB > x > 10 MB) from one machine to another, potentially over multiple sessions.
I have looked briefly at
Are there any other protocols out there that might fit the bill a little better? Most of the above are not very fault tolerant in and of themselves; they rely on client/server apps to pick up the slack. At this stage I care much more about the protocol itself than about any particular client/server implementation that happens to work well.
(And yes, I know I could write my own over UDP, but I'd prefer almost anything else!)
Upvotes: 4
Views: 873
Reputation: 49189
GridFTP is what Argonne is using to transport huge amounts of data reliably.
Upvotes: 1
Reputation: 19782
Well, HTTP is a good option in that it supports resuming partial transfers via byte-range requests. FTP and TFTP are good because you can get server software that's extremely simple to configure, rather than having to lock down something like an HTTP server.
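To make the byte-range point concrete: a resumable HTTP download only needs a `Range` header and a check for a `206 Partial Content` response. Here is a minimal Python sketch, assuming the server supports byte ranges (the URL and file name are just placeholders):

```python
import os
import requests

def resume_download(url, dest):
    """Download url to dest, resuming from any existing partial file via a Range request."""
    offset = os.path.getsize(dest) if os.path.exists(dest) else 0
    headers = {"Range": f"bytes={offset}-"} if offset else {}
    with requests.get(url, headers=headers, stream=True, timeout=30) as resp:
        resp.raise_for_status()
        # 206 means the server honored the Range header and we can append;
        # 200 means it ignored it, so start over from the beginning.
        mode = "ab" if resp.status_code == 206 else "wb"
        with open(dest, mode) as f:
            for chunk in resp.iter_content(chunk_size=64 * 1024):
                f.write(chunk)

# Hypothetical URL -- re-running after an interruption picks up where it left off.
resume_download("http://example.com/big-file.iso", "big-file.iso")
```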
Upvotes: 1
Reputation: 3776
I use rsync (over SSH) to transfer anything that I think might take more than a minute.
It's easy to rate-limit, suspend/resume and get progress reports. You can automate it with SSH keys. It's (usually) already installed (on *nix boxes, anyway).
Depending on what you need, rsync can probably adapt. If you're distributing to a lot of users, FTP/HTTP might be better for firewall concerns; but rsync is great for one-to-one or one-to-a-few transfers.
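As a rough sketch of what such a transfer looks like when scripted (driven from Python via `subprocess`; the host, paths, key, and bandwidth cap below are made-up placeholders), the flags cover resume, progress reporting, and rate-limiting:

```python
import os
import subprocess

# Hypothetical source and destination -- replace with real paths and host.
SRC = "/data/bigfile.bin"
DEST = "backup@example.com:/srv/incoming/"

cmd = [
    "rsync",
    "-avz",            # archive mode, verbose, compress on the wire
    "--partial",       # keep partially transferred files so a rerun can resume
    "--progress",      # show per-file progress
    "--bwlimit=5000",  # rate-limit to roughly 5000 KB/s
    "-e", "ssh -i " + os.path.expanduser("~/.ssh/transfer_key"),  # SSH key for unattended runs
    SRC,
    DEST,
]

# Re-running the same command after an interruption just finishes the job.
subprocess.run(cmd, check=True)
```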
Upvotes: 5
Reputation: 62603
rsync is almost always the best bet.
Since it transfers only differences, an interrupted transfer costs little: the next run only needs to send what is still missing or changed, unlike the first run, when there was no file at the destination at all.
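To illustrate why the rerun is cheap, here is a toy sketch of the delta idea in Python (not rsync's actual rolling-checksum algorithm, and the file names are placeholders): hash the file block by block, and only the blocks the destination is missing or has wrong need to be sent again.

```python
import hashlib

BLOCK = 1024 * 1024  # compare in 1 MiB blocks (real rsync uses rolling checksums)

def block_hashes(path):
    """Hash a file block by block; a missing or truncated file just yields fewer hashes."""
    hashes = []
    try:
        with open(path, "rb") as f:
            while chunk := f.read(BLOCK):
                hashes.append(hashlib.md5(chunk).hexdigest())
    except FileNotFoundError:
        pass
    return hashes

def blocks_to_send(src, dest):
    """Indices of blocks the receiver lacks or has wrong -- all a rerun needs to transfer."""
    src_h, dest_h = block_hashes(src), block_hashes(dest)
    return [i for i, h in enumerate(src_h) if i >= len(dest_h) or dest_h[i] != h]

# After an interrupted copy, only the missing tail (plus any damaged blocks) shows up here.
print(blocks_to_send("bigfile.bin", "partial-copy.bin"))
```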
Upvotes: 4
Reputation: 180023
BitTorrent doesn't require a big seed network to be effective - it'll work just fine with one seeder and one peer. There's a little bit of overhead in setting up a tracker and so on, but once that's done it's a nice, zippy, fault-tolerant way to transfer.
Upvotes: 2