Reputation: 934
I have a very cool GitLab setup here. However, when doing a git clone or git pull, it fails for repositories > 10 MiB in size.
ubuntu~/Projects/git/myRepo(master|✔) % git pull
Username for 'https://example.org': user@example.org
Password for 'https://user@example.org@example.org':
remote: Counting objects: 7798, done.
remote: Compressing objects: 100% (4132/4132), done.
Receiving objects: ... MiB | 222 KiB/s
fatal: The remote end hung up unexpectedly
fatal: early EOF
fatal: index-pack failed
It seems that it is able to copy about 8 MiB of data and runs for about 30 seconds at most. The problem is reproducible every time and shows the same symptoms over and over.
I have read http://jinsucraft.wordpress.com/2013/01/08/when-github-git-clone-fails-with-early-eof-and-index-pack-failed/ and tried:
git config --global http.postBuffer 524288000
on the client to no avail.
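For reference, git config --global --get http.postBuffer is a quick way to confirm the value actually landed in the client's ~/.gitconfig (--get is a standard git-config flag; 524288000 is 500 MiB in bytes):

git config --global --get http.postBuffer
# should print: 524288000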
Does anyone have an idea what could cause this?
Upvotes: 3
Views: 3514
Reputation: 1010
The cause of this problem can be a timeout (or a similar limit, e.g. on the amount of data transferred): a server-side timeout closes the HTTP connection, which results in the client-side "early EOF" error message. Such timeouts can be configured in several places, and other web servers have similar settings, so they might give you a hint where to look. In a standard GitLab setup, the decisive one is the Unicorn worker timeout.
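One example outside GitLab itself: if an nginx reverse proxy sits in front of it (a common setup), its proxy timeouts can cut off a long-running clone just as well. proxy_read_timeout and proxy_send_timeout are standard nginx directives; the 300-second values and the gitlab upstream name below are only illustrative assumptions:

# illustrative nginx location block; upstream name and values are assumptions
location / {
    proxy_read_timeout 300;   # seconds nginx waits for a response from the backend
    proxy_send_timeout 300;   # seconds nginx waits while sending the request upstream
    proxy_pass http://gitlab;
}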
Increasing the timeout in the Unicorn config should solve your problem. Keep in mind that the number of parallel requests is also limited by Unicorn: cloning a large repo blocks one request but causes almost no CPU load, so if your GitLab server has no high-traffic profile, you should consider increasing the worker_processes number.
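As a minimal sketch, assuming a source install where the Unicorn config lives at config/unicorn.rb inside the GitLab directory (the path, the values, and the restart command below depend on your setup):

# config/unicorn.rb -- path assumes a GitLab source install
worker_processes 3    # requests served in parallel; raise it if clones tie up all workers
timeout 300           # seconds before a busy worker is killed; slow clones need more than the default

After editing the file, restart GitLab (e.g. sudo service gitlab restart on a source install) so Unicorn picks up the new values.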
As a side note: the gitlab.yml configuration also offers a git timeout; this timeout limits git operations like calculating the diff of several commits. It has no impact on the timeout when cloning/pulling.
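For reference, a sketch of where that setting lived in gitlab.yml around that GitLab version (section and key names vary between versions, so verify against your own config file):

# config/gitlab.yml -- key names as found in 6.x-era configs; verify for your version
git:
  bin_path: /usr/bin/git
  timeout: 10   # seconds for internal git operations such as computing diffs; does not affect clone/pull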
Upvotes: 7