Reputation: 1364
Look at this picture showing gitlab ce memory consumption.
I really don't need all of those workers, Sidekiq or Unicorn or all of those daemons. This is at IDLE. I mean, I installed this to manage 1 project, with like 4 people; I don't need all those daemons. Is there any way to reduce this?
Upvotes: 39
Views: 61663
Reputation: 1880
Fast forward to 2022, my GitLab v15 instance was using up its entire allotment of memory. I checked & tested some of the recommendations from this guide: Running GitLab in a memory-constrained environment. The changes that in my case reduced memory usage were:
################################################################################
## GitLab Puma
################################################################################
puma['worker_timeout'] = 120
puma['worker_processes'] = 1
################################################################################
## GitLab Sidekiq
################################################################################
sidekiq['max_concurrency'] = 10
I verified the effectiveness of the changes by checking the Service Level Indicators metrics in the bundled Grafana dashboard (at /-/grafana).
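For anyone new to Omnibus GitLab: these settings go in /etc/gitlab/gitlab.rb and, as far as I know (this is the standard Omnibus workflow, not something specific to this answer), are applied with:
sudo gitlab-ctl reconfigure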
Upvotes: 7
Reputation: 420
I'm running gitlab-ce on a Raspberry Pi 4B with 8 GB of RAM.
Setting:
sidekiq['max_concurrency'] = 4
postgresql['shared_buffers'] = "256MB"
did help.
Upvotes: 2
Reputation: 31
I had the same problem: a vanilla GitLab on vanilla Ubuntu 20.04 would last maybe a day before crashing, without any load. Bare-metal EPYC, 8c/16t and 64 GB of RAM.
PostgreSQL was taking its 15G share as mentioned in BrokenBinary's answer, but even "fixing" that to 2G did not suffice.
I also had to fix the number of Puma workers:
puma['worker_processes'] = 2
It seems that newer GitLab installations can leak memory in Puma, the replacement for Unicorn, which itself had memory leaks.
Update: Crashed again. Next try:
sidekiq['max_concurrency'] = 6
sidekiq['min_concurrency'] = 2
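If Puma keeps growing regardless, the memory-constrained guide linked in another answer also describes recycling workers that exceed a memory cap. A minimal sketch, assuming a recent Omnibus version where this setting exists (the 850 is an illustrative value, not from this answer; check the comments in your own gitlab.rb):
# /etc/gitlab/gitlab.rb
puma['per_worker_max_memory_mb'] = 850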
Upvotes: 3
Reputation: 481
Changing /etc/gitlab/gitlab.rb as mentioned in the other answers did not work for me.
This is what I did: I edited the following file:
/var/opt/gitlab/gitlab-rails/etc/unicorn.rb
(Perhaps the path to the file on your machine is different.)
And changed worker_processes 9 to worker_processes 2.
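One caveat, hedged because I have not verified it on every version: files under /var/opt/gitlab are generated by gitlab-ctl reconfigure, so a later reconfigure may overwrite this direct edit; the /etc/gitlab/gitlab.rb route is the persistent one. To pick up the edit you would restart the service:
sudo gitlab-ctl restart unicorn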
Upvotes: 1
Reputation: 51463
I also had problems with GitLab's high memory consumption, so I ran the Linux tool htop.
In my case I found out that the postgresql service used most of the memory: with the postgres service running, 14.5G of 16G were used.
I stopped one GitLab service after the other and found that when I stopped postgres, a lot of memory was freed.
You can try it yourself:
gitlab-ctl stop postgresql
and start the service again with
gitlab-ctl start postgresql
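If you want a scriptable alternative to htop, a plain ps one-liner (standard procps flags, nothing GitLab-specific) lists the processes with the largest resident set size:
ps -eo rss,comm --sort=-rss | head -n 10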
Finally I came across the following configuration in /etc/gitlab/gitlab.rb:
##! **recommend value is 1/4 of total RAM, up to 14GB.**
# postgresql['shared_buffers'] = "256MB"
I set the shared buffers to 256MB by removing the comment #, because 256MB is sufficient for me:
postgresql['shared_buffers'] = "256MB"
and executed gitlab-ctl reconfigure. gitlab-ctl restarts the affected services, and the memory consumption is now very moderate.
Hopefully that helps someone else.
Upvotes: 43
Reputation: 31
I have already fixed this in my case.
What used the most memory was Unicorn!
My GitLab version was "GitLab Community Edition 10.6.3".
It was deployed on my server, whose CPU is an Intel Core i5-8400 with six cores, so GitLab allocated 7 processes to Unicorn, each occupying about 6% of memory.
Method:
vim /var/opt/gitlab/gitlab-rails/etc/unicorn.rb
Reduce worker_processes, save the changes, and execute "gitlab-ctl restart unicorn".
Afterwards htop showed that Unicorn's memory usage had dropped.
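For illustration, the change itself is a single line in that file; the value 2 below is a hypothetical example for a small instance, since the answer does not state the exact number used:
# /var/opt/gitlab/gitlab-rails/etc/unicorn.rb
worker_processes 2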
Upvotes: 3
Reputation: 7899
From your image it looks like Sidekiq and all its workers are using a total of 257 MB of memory, which is normal. Remember that all the Sidekiq workers use the same memory pool, so they're using 257 MB total, not 257 MB each. As you've seen from your own answer, decreasing the number of Sidekiq workers will not drastically decrease memory usage, but it will cause background jobs to take longer because they have to wait for a Sidekiq process to become available. I would leave this value at the default, but if you really want to decrease it, I wouldn't go below 4 since you have 4 cores.
The Unicorn processes also share a memory pool, but each worker has 1 pool that is shared between its 2 processes. In your original image it looks like you have 5 workers, which is the recommendation for a 4-core system, and each is using roughly 250 MB of memory. You shouldn't notice any performance difference if you decreased the number of workers to 3.
Also, you might want to read this doc on how to configure Unicorn. You definitely don't want the number of workers to be less than 2 because it causes issues when editing files from within the GitLab UI, as discussed here, and it also disables cloning over HTTPS according to this quote from the doc I linked:
With one Unicorn worker only git over ssh access will work because the git over HTTP access requires two running workers (one worker to receive the user request and one worker for the authorization check).
Finally, recent versions of GitLab seem to allocate more memory to the PostgreSQL database cache. I'd recommend configuring the postgresql['shared_buffers'] property in /etc/gitlab/gitlab.rb to be 1/4 of your total free RAM. See René Link's answer below for more information on that.
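As a concrete sketch of that advice (the 1GB figure is only an example, assuming roughly 4 GB of free RAM on the host, not a value from the docs):
# /etc/gitlab/gitlab.rb
postgresql['shared_buffers'] = "1GB"
followed by sudo gitlab-ctl reconfigure.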
Upvotes: 20
Reputation: 201
Since GitLab 9.0, Prometheus is enabled by default. I noticed it was using a lot of memory, over 1.5 GB in my case. It can be disabled with prometheus_monitoring['enable'] = false.
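Spelled out as the usual gitlab.rb workflow (same apply step as in the other answers):
# /etc/gitlab/gitlab.rb
prometheus_monitoring['enable'] = false
then sudo gitlab-ctl reconfigure.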
Upvotes: 20
Reputation: 1364
Two options I found browsing gitlab.rb:
sidekiq['concurrency'] = 1 #25 is the default
unicorn['worker_processes'] = 1 #2 is the default
And this, which needs understanding according to their warning:
## Only change these settings if you understand well what they mean
## see https://about.gitlab.com/2015/06/05/how-gitlab-uses-unicorn-and-unicorn-worker-killer/
## and https://github.com/kzk/unicorn-worker-killer
# unicorn['worker_memory_limit_min'] = "300*(1024**2)"
# unicorn['worker_memory_limit_max'] = "350*(1024**2)"
This is after the config modifications. Still WAY too much, in my opinion.
Upvotes: 10